GEOS3104-3804: GEOPHYSICAL METHODS Geodynamic modelling using Ellipsis
Patrice F. Rey GEOS-3104 Computational Geodynamics
Computational tectonics/geodynamics provides a robust platform for hypothesis testing and for exploring coupled tectonic and geodynamic processes. It delivers the unexpected by revealing previously unknown behaviors.
SECTION 1 The rise of computer science
During the mid-19th century, the laws of thermodynamics - which describe the relation between heat and the forces between contiguous bodies - and fluid dynamics - which describes the flow of fluids in relation to pressure gradients - reached maturity. Both theories underpin our understanding of many natural processes, from atmospheric and oceanic circulation to mantle convection and plate tectonic processes, which also involve the flow - brittle or ductile - of rocks. 21st century computers now have the capability to compute, in four dimensions, the flow of the complex fluids, with complex rheologies and contrasting physical properties, that we typically encounter on Earth. For geosciences, this is a significant shift.
Computational tectonics and geodynamics: There is a revolution unfolding at the moment in all corners of science, engineering, and other disciplines like economics and the social sciences. This revolution is driven by the growing availability of increasingly powerful high-performance computers (HPC), and the growing availability of free global datasets and open source data processing tools (Python, R, Paraview, QGIS-GRASS, etc). Here in Australia, supercomputers such as Magnus (36,000 cores) at the Pawsey Supercomputing Centre in Perth, and Raijin (58,000 cores) at NCI (National Computational Infrastructure) in Canberra, have enabled and democratised the art of numerical modelling.

Navier-Stokes equations: Written in the 1850's, the Navier-Stokes equations are the set of differential equations that describe the motion of fluids and the associated heat exchanges. They relate velocity gradients to pressure gradients and can be analytically solved only when considering steady laminar flows. (Portrait: George G. Stokes, 1819-1903.)
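For reference, the momentum balance at the heart of these equations can be written (in one common incompressible form; here ρ is density, v velocity, t time, P pressure, η viscosity and g gravity):

    ρ (∂v/∂t + v·∇v) = −∇P + η∇²v + ρg

In slowly creeping geological flows the inertial terms on the left-hand side are negligibly small, and the balance reduces to the Stokes (creeping-flow) equation that mantle- and lithosphere-dynamics codes solve together with the conservation of mass and energy.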
Broadly speaking, numerical modelling is a discipline which enables the exploration of the behavior of complex systems. It gives scientists, engineers and economists the capacity to extract new knowledge and understanding from big data. In the context of geology and geophysics, for instance, numerical modelling allows us to build a model of the lithosphere and explore how this lithosphere behaves when submitted to tectonic forces. It also enables geoscientists to build spherical models of the Earth to explore mantle convection. These models can account for a very broad range of petrophysical properties including radiogenic heat, heat diffusivity, density, heat capacity, rheology (brittle and ductile), solidus and liquidus, etc. The behavior of the lithosphere and that of the convective mantle are governed by the laws of thermodynamics, which describe exchanges of energy within the system, and fluid dynamics, which relates deformation (i.e. flow) to pressure gradients. Both thermodynamics and fluid dynamics are fully developed theories from 19th century physics. Yet, it is only over the past decade that HPC became powerful enough to efficiently solve the relevant equations in 3D at a reasonable spatial and temporal resolution.
One hundred years later (in the 1950's), with the advent of mainframe computers, the Navier-Stokes equations could be discretised and solved at the nodes of a numerical grid to explore the type of complex, time-dependent fluid flow we encounter in nature. At the same time, numerical methods and computer algorithms were progressing so fast that they quickly overtook the computer capability of their time. In short, in the second half of the 20th century computers were not powerful enough to take advantage of these new numerical methods. As computers grew in power, so did the complexity of the fluid flow problems that one could tackle. However, for quite some time, computational tectonics involving a layered lithosphere made of stronger (upper crust and upper mantle) and weaker (lower crust and lower lithospheric mantle) layers was limited to two-dimensional models in which the nodes of the computational grid had to follow the model deformation (Lagrangian grid). Computational geodynamics could afford 3D models because models of mantle convection could make use of a more efficient fixed grid (Eulerian grid), as long as the convective mantle was made of one single newtonian fluid.

(Photos: a 1950's mainframe computer; Yuen, Minneapolis; Beaumont, Halifax.)

Some examples of simple laminar flows for which analytical solutions exist include the free fall of a spherical object through a newtonian fluid (the settling of a crystal in a magma), the flow induced by the motion of a rigid plate above a newtonian fluid (Couette flow, analogous to the motion of a tectonic plate above the asthenosphere), and the flow of a newtonian fluid between two static plates (Poiseuille flow, for instance the flow of the lower crust in orogenic plateaux).
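For instance, the Poiseuille case has a simple analytical solution: for a horizontal channel of thickness h filled with a newtonian fluid of viscosity η, driven by a horizontal pressure gradient dP/dx, with no-slip at both walls and z measured from one wall,

    u(z) = (1/(2η)) (dP/dx) (z² − h z),

a parabolic velocity profile with its maximum speed at the channel centre, the kind of profile often invoked for lower-crustal channel flow beneath orogenic plateaux.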
Particle-in-cell numerical methods. In the late 1990's, computers became powerful enough to allow the implementation of a 1950's numerical method called particle-in-cell (PIC, developed at Los Alamos National Laboratory), in which individual fluid elements, carrying material properties and flow history, are advected through a fixed computational grid (PIC = Eulerian mesh + Lagrangian particles). For geologists, this progress meant that one could simulate the deformation of mechanically layered systems such as the Earth's lithosphere, albeit in two dimensions. Ellipsis, the code we will use in GEOS3104, is one of the earliest and most robust codes (along with Citcom) implementing an efficient PIC method on the back of a robust multigrid solver.

Over the past decade, the growing availability of powerful high-performance computers has unleashed the power of computer-based modelling in all branches of science. Geoscientists are now able to simulate in four dimensions, using realistic coupled thermal and mechanical properties, lithospheric-scale deformation and mantle geodynamics. 21st century HPC have finally caught up with 19th century physics, and with computer science from the mid-50s.
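To make the idea concrete, here is a minimal, purely illustrative sketch in Python (not Ellipsis's actual implementation): particles carrying a material identifier are advected through a fixed grid on which a toy velocity field is defined; the grid size, velocity field and layering are arbitrary choices for illustration only.

    import numpy as np

    # Fixed (Eulerian) grid on which the velocity field is defined.
    nx, nz = 32, 16
    x = np.linspace(0.0, 1.0, nx)
    z = np.linspace(0.0, 0.5, nz)

    # A prescribed, steady toy velocity field stored at the grid nodes:
    # horizontal speed increases with z (a crude simple-shear profile).
    vx = np.tile(z, (nx, 1)).T * 2.0      # shape (nz, nx)
    vz = np.zeros((nz, nx))

    # Lagrangian particles: positions plus a material identifier they carry.
    rng = np.random.default_rng(0)
    n_part = 2000
    px = rng.random(n_part)
    pz = rng.random(n_part) * 0.5
    material = (pz > 0.25).astype(int)    # two "layers" of different material

    dt, n_steps = 0.01, 100
    for _ in range(n_steps):
        # Look up the velocity at the nearest grid node (crude interpolation).
        i = np.clip(np.searchsorted(x, px), 0, nx - 1)
        j = np.clip(np.searchsorted(z, pz), 0, nz - 1)
        px = (px + vx[j, i] * dt) % 1.0   # advect particles; the grid never moves
        pz = pz + vz[j, i] * dt

    # Each particle still knows which material it is: properties travel with
    # the particles while the equations are solved on the fixed grid.
    print(material[:10], px[:10].round(3))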
Numerical experiments vs numerical modelling vs numerical simulation: Before going further we need to clarify the difference between experiment, modelling and simulation. For most people these concepts are interchangeable, but not for the experts.
Numerical experiments (like physical experiments) do not pretend to reproduce a natural process in a realistic manner. The aim is to illustrate a concept, or to understand the few most important parameters involved in a particular process. The famous analogue experiment by Paul Tapponnier (performed for the first time in the very late 70's) perfectly illustrates the concept of escape tectonics, a process which accommodates convergence via the lateral expulsion of continental blocks in front of a rigid indenter. Clearly, this experiment bears very little resemblance to tectonics in Asia and South East Asia, but it does a nice job of illustrating a concept which has changed our understanding of collisional tectonics.

Figure: Rey, Coltice, Flament, Nature 2014. We have used Ellipsis to show that early proto-continents and thick oceanic plateaux had enough gravitational power to slowly force adjacent oceanic lithospheres to subduct. The model shown is 700 km x 2800 km and includes continental crust (red), lithospheric mantle (pink) and partially molten mantle (bright blue); the rest is mantle.
Numerical modelling aims at understanding a particular process within a particular tectonic context. The initial and boundary conditions are carefully thought through to describe a geologically realistic setting. Results of the modelling are detailed enough to be compared - to the first order - to natural geological examples, without trying to match any one of them accurately.

Numerical simulations aim at reproducing, with the greatest level of detail and the greatest level of realism, a particular process in a particular region. For most people this is what modelling is all about, which is why they are quick to point to the many shortcomings of Paul Tapponnier's experiment and dismiss its relevance, since it accounts for neither the highest mountain chain nor the highest plateau on Earth. Numerical simulation assumes that our models are able to include parameters with realistic value distributions and time-dependencies, as well as nested heterogeneities on a broad range of scales. In addition, numerical simulation, in the context of geosciences, assumes that we are able to implement a very broad range of coupled natural mechanical, petrological and geochemical processes, including the migration of partial melt and the associated advection of heat, the release and consumption of water and other volatiles, evolving anisotropic rheologies due to grain-size reduction, and dilatancy due to micro-cracking and other softening mechanisms. It will take decades before we can properly engage with numerical simulation in a meaningful manner.

Multi-grid solver: The Finite Element Method (FEM) consists in discretizing a set of partial differential equations across a 2D or 3D domain (i.e. partial derivatives ∂ are approximated by finite differences Δ). This allows the Navier-Stokes equations to be solved efficiently at the nodes of a grid covering the model, but at the price of a small error. To minimize this error, users can use a very fine grid. The finer the grid, the higher the resolution, but the longer the compute time; hence the need to balance accuracy and compute time. To solve the set of equations efficiently, computer scientists design methodologies taking advantage of parallel computing, as well as other techniques. One involves the use of a stack of computational grids of increasing resolution: instead of solving the set of equations directly at the nodes of one single fine grid, a rough solution is computed on a coarser grid, and this solution is iteratively refined at the nodes of grids of increasing resolution. Ellipsis uses a multigrid solver.
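To give a feel for the coarse-to-fine idea, here is a minimal, purely illustrative two-grid sketch in Python (a toy 1D Poisson problem, not Ellipsis's actual solver): a cheap solution is computed on a coarse grid, then used as the starting guess for relaxation on a finer grid.

    import numpy as np

    def jacobi(u, f, h, sweeps):
        # Jacobi relaxation for u'' = f with fixed (Dirichlet) end values.
        u = u.copy()
        for _ in range(sweeps):
            u[1:-1] = 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
        return u

    n_coarse, n_fine = 17, 33
    x_c = np.linspace(0.0, 1.0, n_coarse)
    x_f = np.linspace(0.0, 1.0, n_fine)
    f_c = np.sin(np.pi * x_c)            # toy right-hand side
    f_f = np.sin(np.pi * x_f)

    # 1. A rough solution is obtained quickly on the coarse grid.
    u_c = jacobi(np.zeros(n_coarse), f_c, 1.0 / (n_coarse - 1), 2000)

    # 2. It is interpolated (prolongated) onto the finer grid ...
    u_f = np.interp(x_f, x_c, u_c)

    # 3. ... and refined there with further relaxation sweeps.
    u_f = jacobi(u_f, f_f, 1.0 / (n_fine - 1), 200)

    # The exact solution of u'' = sin(pi x) with u(0)=u(1)=0 is -sin(pi x)/pi^2.
    print("max error:", np.abs(u_f + np.sin(np.pi * x_f) / np.pi ** 2).max())

A real multigrid solver cycles up and down a whole stack of grids (restriction as well as prolongation), but the principle of refining a cheap coarse solution on finer grids is the same.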
SECTION 2 Ellipsis: FEM-PIC code
Ellipsis solves the Navier-Stokes equations at the nodes of a computational grid using the Finite Element Method (FEM). Solving the Navier-Stokes equations provides the velocity field, from which a range of other field properties can be computed (pressure, viscosity, stress, etc). Lithologies are represented by particles distributed inside the grid's cells (hence the acronym PIC: Particle In Cell). In this section, you will learn to compile Ellipsis on your own Mac or Linux computer, and you will learn the commands to launch and stop Ellipsis from a Terminal window. Some basic Unix skills are necessary, but they involve knowing a dozen Unix commands at most.
Introduction: Ellipsis is a child of CitcomS, a code dedicated to mantle convection modelling. Ellipsis has a long history of development, starting in the late 90's at Caltech by Louis Moresi (now at Melbourne University). It is designed to explore two-dimensional coupled thermal and mechanical tectonic processes. It can simulate both brittle and ductile deformation, where brittle rheologies evolve with accumulated strain, and ductile rheologies evolve with temperature, stress, strain rate and melt fraction. Although more powerful parallel open source codes exist (e.g. GALE, Underworld), their installation requires a higher level of computational expertise and access to high-performance computers. The main advantages of Ellipsis are that its installation is "relatively" simple, it can run on a laptop computer, and it is powerful enough to produce innovative research outcomes publishable in Nature.
Installation: Ellipsis is a free open source code that runs on unix-based computer operating systems (e.g. Linux, Mac OSX, etc). The source code (a bunch of files written in C) can be downloaded from the GeoFramework site at the California Institute of Technology. To compile the Ellipsis source code on your machine you need access to an appropriate gcc compiler and a Terminal window. On Mac, the Terminal window comes with the Xcode.app (free from the Mac App store). If you already have access to a Terminal, check whether you have an appropriate compiler: open a Terminal window and execute the following command: gcc --version. If no gcc compiler is installed you need to install one. Go to http://hpc.sourceforge.net/ and download the gcc compiler for your OS, then open a Terminal window and execute: sudo tar -xvf /path_to/gcc-5.1-bin.tar.gz -C /. On Linux, simply download from the Internet a gcc compiler for your version of Linux. To compile the source code, open a Terminal window, navigate to the Ellipsis source directory (use the cd command and drag & drop), then execute the following three lines (you may have to specify the path to your compiler by adding the option CC=/path_to/gcc-5.1 after CFLAGS='-m32'):

./configure CFLAGS='-m32'
make clean
make

An executable called ellipsis3d will be created in the source directory. Copy this executable (it is only a small file) and paste it in a folder in which your input files will also be located (good practice is to keep a copy of the Ellipsis executable with each model, so you can keep track of which version of Ellipsis you have been running). To run (i.e. execute) an input file, simply open a Terminal window, navigate to the folder holding ellipsis3d and the input file (here mycoolmodel.input) and enter the following command:

./ellipsis3d mycoolmodel.input
To stop the model, type Control-C. More often than not, the reference model described in the input file requires access to a file specifying the temperature at each node of the computational grid. This file is called from the input file via previous_temperature_file="mytempfile". We will see later how this file can be generated. The temperature file must be present in the folder in which you are running the experiment, as shown in the figure on the right.
As Ellipsis runs, it dumps at each time step a bunch of files into the folder. These files, specified by the user in the input script, store grid and particle information (temperature, pressure, velocity, stress, strain rate, viscosity, ...).

#.ppm0: These files are graphic outputs showing the various lithologies, melt fraction, stress, viscosity, etc.

#.node_data: These files store parameters such as temperature and pressure at the nodes of the computational grid. The information in node_data can be used for data mapping (mapping of isotherms, super-solidus temperatures, melt fraction, ...). For this you will need to process the data stored in .node_data using Matlab, R, Python or any other appropriate tool.

#.profiles: These files give access to profiles (horizontal or vertical) passing through "sampling tracers". In the input file one can specify the initial position of up to 50 sampling tracers (this position can remain fixed to the computational grid, or can move with the rocks through the grid), the direction of the profile (horizontal or vertical), and the type of data to be profiled (temperature, melt, depletion, pressure, velocity, strain rate, stress, ...).

#.sample: These files are updated at each time step to add the new values of the parameters attached to one of the sampling tracers. Hence, there is one .sample file per sampling tracer. These files record the evolution through time of data attached to a tracer (x, z, Data), where Data can be temperature, pressure, etc. They can be used to collect data to plot PTt paths.
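The exact column layout of a .node_data file depends on your run, so the Python snippet below is only a sketch: it assumes a plain-text table with one row per node and x, z and temperature in the first three columns, and the filename is a placeholder; check your own output and adjust the indices accordingly.

    import numpy as np
    import matplotlib.pyplot as plt

    # ASSUMED layout: whitespace-separated columns, x z temperature ... per node.
    data = np.loadtxt("mycoolmodel.node_data", comments="#")
    x, z, T = data[:, 0], data[:, 1], data[:, 2]

    # Contour the temperature field and highlight one isotherm (value is arbitrary).
    plt.tricontourf(x, z, T, levels=20, cmap="magma")
    plt.colorbar(label="temperature")
    plt.tricontour(x, z, T, levels=[1300.0], colors="k")
    plt.gca().invert_yaxis()          # plot depth increasing downward
    plt.savefig("isotherms.png", dpi=150)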
Resolution of the computational grid and particle density: Solving the Navier-Stokes equations is handled via a multigrid solver which uses a stack of increasingly finer computational grids. A rough solution of the velocity field is obtained rapidly on a coarser grid. This solution is then used as input on the next finer grid, and so on. In the input file, the number of levels refers to the number of grids in the multigrid stack (see figure below). The number of grid cells is multiplied by 4 when moving from one grid level to the next. The more levels, the better the resolution of the model, but the longer the running time. The pressure field is calculated using a sub-set of the multigrid stack (pmg_levels), the maximum depth of which is levels-1. The finest grid also contains Lagrangian tracers. They carry information about the material properties. The more tracers there are, the better defined are the density interfaces. There is no hard and fast rule for how many grid levels and tracers one should use. Consequently, a model may take anywhere between a few minutes and several months to run... In most cases, levels=4 is sufficient when testing lithospheric-scale models. Once the model has been properly tested at levels=4 to ensure that geometry and boundary conditions are properly set, it is good practice to let it run at levels≥4 (up to levels=6).
Figure. A grid stack (levels=3: grid 0, grid 1 and grid 2) for the multigrid solver. The number of cells in the finest grid is determined by levels (here 3) and the parameters mgunitx (number of cells along the x direction in grid 0, here 5) and mgunitz (number of cells along the z direction in grid 0, here 3). On the right, tracer_density is defined on the finest grid level. In this example tracer_density=4, meaning there are four tracers per element in both the x and z directions (i.e. 4x4=16 tracers per element).
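As a quick sanity check, the example values from the figure can be turned into cell and tracer counts (a small Python calculation; it assumes, as stated above, that the cell count is multiplied by 4, i.e. doubled in each direction, from one grid level to the next):

    levels, mgunitx, mgunitz, tracer_density = 3, 5, 3, 4
    nx = mgunitx * 2 ** (levels - 1)          # cells along x on the finest grid: 20
    nz = mgunitz * 2 ** (levels - 1)          # cells along z on the finest grid: 12
    cells = nx * nz                           # 240 cells on the finest grid
    tracers = cells * tracer_density ** 2     # 16 tracers per cell -> 3840 tracers
    print(nx, nz, cells, tracers)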
Initial temperature field: The temperature field associated with an input file is computed by running the input script without any kinematic or dynamic boundary conditions (i.e. the model is not tectonically extended or shortened). For maximum computational efficiency one can increase all viscosities and turn plastic rheologies off (so the model does not deform while the geotherm evolves); this is achieved using the following over-riding toggles (at the end of the input script):

TDEPV=off
visc_max=5e31
visc_min=1e31
YIELD=off
... and increasing fixed_timestep to 1 million years (since seconds are the unit of time):

fixed_timestep=3.1536e13
Then, any #.node_data output can be used as a previous temperature file by setting, in the Steady-States and Boundary Conditions section of the input file:

previous_temperature_file="A_My_Previous_Temperature_File.node_data"
Time step: Except when running thermal equilibration, the time step should be kept below 6.3072e12 sec (that's 200,000 years). If weird things happen (e.g. unexpected short-term fluctuations), try reducing the time step.
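These values follow directly from Ellipsis using seconds as its unit of time: 1 yr ≈ 3.1536 x 10^7 s, so 200,000 yr = 6.3072 x 10^12 s and 1 million years = 3.1536 x 10^13 s.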
Installing Ellipsis on Windows: In terms of programming platform, Windows can't compete with Unix-based operating systems. One solution for Windows users is to partition the hard drive and install Ubuntu (or another Linux distribution such as Debian) on one of the partitions, giving access to both Windows and Linux environments. If this is too intimidating, then the next best thing is the Cygwin environment. Cygwin gives Windows users a Unix-like environment; it is a collection of the most common programming tools and compilers (including the bash shell) for Windows.

1. Install Cygwin (https://www.cygwin.com/). Download the 32-bit or 64-bit setup executable depending on which variant of Windows you are using.

2. From within Cygwin's setup.exe program install the base (gives access to bash), devel (gives access to gcc compilers and make), util and web (gives access to the wget tools) packages.

Some advice on installing Cygwin: http://preshing.com/20141108/how-to-install-the-latest-gcc-on-windows/
There are a number of alternative C compilers and development environments available for Windows; MinGW is one of the most popular, providing access to gcc (http://www.mingw.org). MinGW can be installed alongside Cygwin. By editing the Cygwin path, calls to gcc or g++ can point to the MinGW bin directory.
Then in Cygwin execute the following:
./configure CFLAGS='-m32'
make clean
make
Using Ellipsis on EarthByte's server via the PC lab: In this solution users are completely dependent on the infrastructure provided; once this course is finished you won't be able to use Ellipsis. A great solution for teachers, but a not so great solution for students.