Technische Universiteit Delft
Faculty of Electrical Engineering, Mathematics and Computer Science
Faculty of Applied Sciences
Delft Institute of Applied Mathematics

Dissipation in the Abelian Sandpile Model

Report submitted to the Delft Institute of Applied Mathematics in partial fulfilment of the requirements

for the degree of

BACHELOR OF SCIENCE in Applied Mathematics and Applied Physics

by

HENK JONGBLOED

Delft, the Netherlands, August 2016

Copyright © 2016 by Henk Jongbloed. All rights reserved.

BSc thesis APPLIED MATHEMATICS and APPLIED PHYSICS

“Dissipation in the Abelian Sandpile Model”

HENK JONGBLOED

Technische Universiteit Delft

Supervisors

Prof. dr. ir. F.H.J. Redig
Dr. ir. J.M. Thijssen

Other committee members

Dr. T. Idema
Dr. J.L.A. Dubbeldam
Dr. J.A.M. de Groot

August 2016, Delft

Abstract

The Abelian Sandpile Model was originally introduced by Bak, Tang and Wiesenfeld in 1987 as a paradigm for self-organized criticality. In this thesis, we study a variant of this model, both from the point of view of mathematics and from the point of view of physics. The effect of dissipation and creation of mass is investigated. By linking the avalanche dynamics of the infinite-volume sandpile model to random walks, we derive criteria on the amount of dissipation and creation of mass for the model to be critical or non-critical. As an example, we prove that a finite number of conservative sites on a totally dissipative lattice does not yield criticality, and more generally, that if the distance to a dissipative site is uniformly bounded from above, then the model is not critical. We also apply a renormalisation method to the model in order to deduce its critical exponents and to determine whether a constant bulk dissipation destroys critical behaviour. Numerical simulations and a statistical analysis are performed to estimate critical exponents. Finally, we give a short discussion of self-organized criticality.

Preface

Sitting here in the picturesque Zillertal in Austria, the beauty of nature intrigues me. The symphony of light, sound and movement has triggered existential sensations in many people throughout history. No wonder that the great Greek philosopher Aristotle said 'Wonder is the beginning of all philosophy'. The research documented in this report forms my Bachelor Project at the TU Delft for the programmes Applied Mathematics and Applied Physics. When choosing a project, I wanted to study a subject that contained a robust mathematical aspect as well as an interesting physical interpretation. Numerical simulation was also a preference. These properties are all found in the subject of this research, the Abelian Sandpile Model. Proposed for the first time in 1987 by Bak, Tang and Wiesenfeld in a physical context, it served as an illustration of 'self-organized criticality'. Since then, the model and related concepts have been studied in both the mathematics and physics literature. Although much of the model has been understood thus far, there remain open questions as to how complex structure can arise out of elementary local interaction rules. This touches on the philosophical concept of emergence. These three disciplines combined (physics, mathematics and philosophy) make the Abelian Sandpile Model a beautiful subject for me to study. Some personal notes are in order here. During this project, I had a lot of other activities: three courses, extracurricular activities and preparations for next year, which will be a year on the board of a student association. All of this made it difficult to really focus on the project, especially since I worked on it on my own. In fact, I have come to know myself, my interests and my weaknesses in studying better. Projects differ from courses in many ways and require a great deal of independence.
For double-degree bachelor students, the difference can be even greater, due to the relatively small number of projects we have done before and the necessity of 'efficient course studying' in order to graduate in time. In a project, efficiency is a lot harder to achieve, at least in my case. Delay of my thesis defence was one of the consequences. I do not know exactly how to say it, but the last year of my bachelor was a lot harder than expected. This project was a real challenge for me; I have learned a lot, but I also disappointed myself due to my poor planning skills. These are all lessons I will take with me in the future. I would like to thank my supervisors Frank Redig and Jos Thijssen for the time and effort they offered to help me in my project. Making appointments always went smoothly and I have learned a lot from them. I discussed the mathematical background of this project with Frank, and many times during these meetings we ran into problems that I could not even begin to solve with my relatively limited experience in the field of probability. Luckily, Frank always found a way to tackle a specific problem. With Jos, I mostly discussed numerical simulation, renormalisation theory and the general theory of critical phenomena. These meetings were always very informative and relaxed, for which I thank him greatly. I also thank dr. Johan Dubbeldam, dr. Timon Idema and dr. Joost de Groot for taking part in my thesis committee. Finally, I would like to thank my family, roommates and close friends for giving me advice and helping me.

Henk Jongbloed, August 15th, 2016

Contents

1 Introduction
2 The classical Abelian Sandpile Model
3 Introduction to mixed dissipative/source systems
4 Markov Processes, semigroups and generators
  4.1 Definitions
  4.2 Deriving the generator of Markov Processes
  4.3 The Feynman-Kac formula for countable state space Markov Processes
5 Toppling numbers, Avalanches and Random Walks
  5.1 Linking avalanche dynamics to random walks
  5.2 Towards infinite volume
  5.3 Estimating critical behaviour
  5.4 From a CRW to a DRW
6 Mathematical results
  6.1 Conditions on D to obtain non-criticality
  6.2 Adding sources
  6.3 Finitely many sources
7 A Renormalisation Approach to the BTW model
  7.1 Introduction: Critical phenomena and renormalisation
  7.2 General remarks
  7.3 Renormalisation equations
  7.4 Fixed points
  7.5 Critical exponents
  7.6 Introducing dissipation
8 Numerical simulation
  8.1 Simulating the ASM
9 Critical avalanche data analysis
  9.1 The BTW approach
  9.2 Likelihood estimation of τ
  9.3 Truncated power law MLE estimation
  9.4 Our recommendation
10 Self-Organized Criticality
11 Conclusions, Discussion, Recommendations and personal notes
Appendices
A Project Description: Dissipation in the Abelian Sandpile Model
B Code

1 Introduction

It has been almost 30 years since Per Bak, Chao Tang and Kurt Wiesenfeld (BTW) proposed the sandpile model as a paradigm of self-organized criticality (SOC) [1]. It serves as the simplest and best-studied example of a non-equilibrium system, driven at a slow steady rate by adding particles, with local threshold relaxation rules, which in its critical state shows power-law behaviour obtained without fine-tuning of any control parameters. BTW claimed that the concept of SOC explains many different physical phenomena: from the formation of the Earth's crust to the dynamics of solar flares to the distribution of skyscrapers in the world's biggest cities [2].

Figure 1: Per Bak, Chao Tang and Kurt Wiesenfeld.1

This immediately explains the relevance of studying 'toy models' such as the sandpile model. Through detailed understanding of these relatively simple models, we may be able to make predictions about certain phenomena in the real world. The model was originally defined as follows. In one dimension (d = 1), consider a connected subset of Z, which we will call Vn. Without loss of generality we can take Vn = [−n, n] ∩ Z, so that 2n + 1 is the size of the system. We denote by x ∈ Vn a site, and we define a height function η on Vn. At each step, we choose a site x at random and add a particle there, increasing the local height by one unit.

η(x) → η(x) + 1

Given the fixed threshold value 2, when a local height becomes greater than or equal to 2, the site topples, losing two particles, one to each neighbour; this is referred to as a toppling event.

η(x) → η(x) − 2,  η(x ± 1) → η(x ± 1) + 1,  if η(x) ≥ 2
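As an illustration, the one-dimensional rule above can be coded up directly. The following is a minimal sketch (our own code, not the simulation code of Appendix B), assuming absorbing boundaries and threshold 2:

```python
def stabilize_1d(heights):
    """Repeatedly topple any site with height >= 2 until the
    configuration is stable, and return the stable configuration."""
    h = list(heights)
    while True:
        try:
            i = next(k for k, v in enumerate(h) if v >= 2)
        except StopIteration:
            return h                   # no unstable site left
        h[i] -= 2                      # the toppling site loses two grains...
        for j in (i - 1, i + 1):       # ...one goes to each neighbour
            if 0 <= j < len(h):
                h[j] += 1              # grains toppled over the boundary are lost
```

For example, `stabilize_1d([0, 2, 0])` gives `[1, 0, 1]`: the middle site topples once and each neighbour receives one grain.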

Boundaries can be either absorbing or periodic. With absorbing boundaries, the one-dimensional BTW model naturally evolves to a minimally stable state, where all but one site have a height of 1. The toppling rules in higher dimensions are essentially the same as in one dimension. In most papers, a sandpile on a simply connected subset of Z^d is defined via a height function η, for which a critical height of 2d determines the toppling condition. Upon toppling, a site distributes 2d grains to its 2d nearest neighbours. On boundary sites, particles are lost. Thereafter, if another site has become unstable from a previous toppling, it topples in turn, until no unstable sites remain in the system. At this point, a particle is again added to a random location in Vn. A series of connected toppling events is called an avalanche. However, the dynamics of higher-dimensional BTW models is fundamentally

1Sources: http://www.eoht.info/page/Per+Bak, https://en.wikipedia.org/wiki/Chao_Tang, https://www.physics.gatech.edu/user/kurt-wiesenfeld

different from the one-dimensional model. Rather than evolving to its minimally stable configuration, it naturally evolves to a critical state in which avalanches of all sizes, up to the size of the system itself, occur. In the years following the proposal of the model, many researchers began working on it. Numerical simulations were performed by Manna [4], who numerically identified some parameters governing the critical state of the 2D model. Not long after the BTW paper, Dhar and others [3] began formalising the model in mathematical terms by introducing the concept of addition operators, their group structure and spanning trees. He therefore called the original BTW model the 'Abelian Sandpile Model' (ASM). Abelianness of the dynamics stems from the fact that first adding a particle at a site x, letting the relaxation dynamics take place, and then adding a particle at y and letting the system relax results in the same final configuration as first adding at y and thereafter at x. Later, Frank Redig considered the infinite-volume limit of the ASM as the limit of the finite model. Critical exponents in various dimensions have been computed by mean-field approximations as well as by partially rigorous computations [6] and renormalisation methods [20], but in most cases these derivations are highly non-trivial. Methods of statistical physics provide another way to analyse the BTW model. The theory of critical phenomena, such as phase transitions, has developed greatly since the 1930s. Since then, numerous critical phenomena have been analysed. Like water near its critical point, the behaviour of large ensembles of particles changes dramatically when approaching a so-called 'critical point': characteristic time and length scales vanish and highly non-linear dynamics are observed.
The scale-invariance that characterises the critical state of such systems gives rise to renormalisation theory. In this method, the dynamics at a generic scale are linked to the dynamics at another scale via the renormalisation transformation. Scale-invariance then allows calculation of critical exponents and formulation of scaling laws. In this thesis, we will look at the effect of dissipation and formation of mass in the Abelian Sandpile Model. We will study variations of the BTW model with sinks (dissipative vertices where mass disappears) and sources (vertices where mass is added). Analytically, we will focus on the question of the level of dissipation above which criticality is lost. Physically, we will apply a renormalisation approach to the infinite-volume BTW model in two dimensions, and numerically simulate the finite-volume BTW model with sinks and sources. This report is organised as follows. Firstly, we will introduce the Abelian Sandpile Model in a mathematical way. We then continue by considering micro-scale dynamics in the one-dimensional model and the effect of sources and sinks. Thereafter, we develop the general mathematical theory concerning Markov processes, their semigroups and generators. Via the famous Feynman-Kac formula combined with Dhar's formula, we will relate avalanche dynamics to a simple random walk on Z^d, derive a characterisation of criticality in the d-dimensional model and consequently derive conditions for criticality or non-criticality. The more physical part of this thesis begins by introducing critical phenomena and related concepts. Thereafter, a renormalisation approach will be applied to the BTW model, in which we try to incorporate dissipation. The concepts of self-organised criticality and emergence are discussed. Thereafter, we present results from numerical simulations. It will come as no surprise if the reader has not fully understood all the matter in this introduction.
It mostly serves as a quick overview of the sandpile model and its history in various disciplines of science. We encourage the reader to enjoy what follows!


2 The classical Abelian Sandpile Model

This section provides a brief discussion of the general mathematical aspects of the classical Abelian Sandpile Model, sometimes abbreviated as ASM. By 'classical', we mean that we introduce the model in the same way as Bak, Tang and Wiesenfeld [1], as later described mathematically by Dhar [3]. This section is based on chapter 3 of the paper by Redig [5]. We consider as a 'sandpile base' the simply connected finite set Vn ⊆ Z^d. Many other lattices and sets exist on which sandpile automata can be defined, but we will restrict ourselves to this class of lattices. We define the sandpile base by

Vn = [−n, n]^d ∩ Z^d

with n ∈ N fixed. As the toppling or system matrix, we take minus the lattice Laplacian: ∆x,x = 2d; ∆x,y = −1 for x, y ∈ Vn with |x − y| = 1; and ∆x,y = 0 for x, y ∈ Vn with x ≠ y and |x − y| ≠ 1. A height configuration η is a map η : Vn → N, and the set of all height configurations is denoted by H. A height configuration η ∈ H is called stable if for every x ∈ Vn, η(x) < ∆x,x. The set of all stable configurations is denoted by Ω = {η ∈ H : η(x) < ∆x,x for all x ∈ Vn}. A site x ∈ Vn where η(x) ≥ ∆x,x is called an unstable site. The toppling of a site x ∈ Vn is now conveniently defined by

Tx(η)(y) = η(y) − ∆x,y (1)

This means that the site x will lose 2d grains and will distribute them to its nearest neighbours in Vn. Note that mass is conserved, except when boundary sites topple, i.e. sites with fewer than 2d neighbours in Vn. By convention, boundary sites lose 2d grains upon toppling, distribute these to their nearest neighbours where present, and the remaining grains enter a global sink. The toppling of a site x is called legal if x is unstable; otherwise it is called illegal. Furthermore, if x, y ∈ Vn are both unstable sites of η, then

TxTy(η) = η − ∆x,· − ∆y,· = TyTx(η) (2)

This is the elementary abelian property of the Abelian Sandpile Model, which follows directly from the commutativity of addition in Z^Vn. From the elementary abelian property it follows that any two finite sequences consisting of the same legal topplings yield the same end result, independent of the order of toppling. This is very important from both a mathematical and a computational perspective. Also, the elementary abelian property motivates defining the following operator:

S (η) = Tx1 ...Txn (η) (3)

S : H → Ω is called the stabilization operator. In this definition, two additional requirements are imposed: S(η) must be stable, and for all i ∈ {1, ..., n} the toppling at site xi must be legal. For η ∈ H

and a sequence Tx1 ...Txm of legal topplings, we define the toppling numbers of that particular sequence as

n_x = Σ_{i=1}^{m} I(x_i = x) (4)

Now the configuration resulting from that sequence of topplings can be written as

Tx1 ...Txm (η) = η − ∆n (5)

where n is the column vector indexed by x ∈ Vn with elements n_x. This formula is very important in the mathematics of the ASM. It is proven in [5] that in the classical case of the finite-volume ASM, S is well-defined. This means that if η ∈ H is given and Tx1 ...Txn and Ty1 ...Tym are two sequences of legal topplings leading to a stable configuration, the resulting configuration will be the same. Furthermore, n_x = m_x for all x ∈ Vn. In the proof of the well-definedness of S, it is very important that the toppling numbers resulting


from the stabilizing process via legal topplings are finite. That is, for every η0 ∈ H there exists a finite sequence of sites (x_i)_{i=1}^{N} in Vn such that

S (η0) = Tx1 ...TxN (η0) = η0 − ∆n is stable. This is a very important feature of the classical Abelian Sandpile Model. The well-definedness of the stabilization operator S immediately implies that the addition operator

axη = S (η + δx) (6) is well defined, and abelianness holds:

axayη = ayaxη = S (η + δx + δy), (7)

which is why the ASM is called Abelian. The Abelian Sandpile as described by BTW consists of a sequence of stable configurations. At each time n ∈ N, a 'particle' is added at a certain x ∈ Vn. Thereafter, the configuration is stabilized. This process continues in time, resulting in a sequence of stable configurations evolving in time. More mathematically, let p = p(x) be a probability distribution on Vn, so p(x) ≥ 0 for all x ∈ Vn and Σ_{x∈Vn} p(x) = 1. Starting from an initial height configuration η0 ∈ H, the configuration at time n is given by the random variable

ηn = Π_{i=1}^{n} a_{Xi} η0,  n ∈ N (8)

where X1, ..., Xn are i.i.d. with distribution p. The process (8) is a Markov chain. Ω is the state space of the Markov chain, and the Markov transition operator defined on functions f : Ω → R is then given by

P f(η) = E(f(η1) | η0 = η) = Σ_{x∈Vn} p(x) f(axη) (9)

Now, the configurations η ∈ Ω can be divided into two classes: recurrent and transient, as is always the case with Markov processes. However, we can prove that the set of transient configurations is non-empty. To see this, consider a one-dimensional finite system with Vn = {−2, −1, 0, 1, 2} and initial height function 1 1 0 0 1. We claim that this configuration is transient. Indeed, the two adjacent zeros will never come back: adding a particle to any site in this configuration causes one of the two middle sites to gain a particle. The only way such a site can have zero particles again is by toppling, but then its neighbouring site will have height one. This is also true in d > 1. Fortunately, we have a characterisation of the transient configurations, and equivalently a characterisation of the recurrent configurations.
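Both formula (5) and the abelian property (7) are easy to check numerically in one dimension. The sketch below is our own code (threshold 2, grains lost at the boundary), not the thesis's simulation code; it stabilizes a configuration while counting topplings per site, and defines the addition operator a_x on top of that:

```python
def stabilize_with_counts(eta):
    """Return (S(eta), n) where n[x] counts how often site x toppled."""
    h, n_x = list(eta), [0] * len(eta)
    while any(v >= 2 for v in h):
        i = next(k for k, v in enumerate(h) if v >= 2)  # order is irrelevant
        h[i] -= 2
        n_x[i] += 1
        if i > 0:
            h[i - 1] += 1
        if i + 1 < len(h):
            h[i + 1] += 1
    return h, n_x

def a(eta, x):
    """Addition operator a_x: add one grain at x, then stabilize."""
    eta = list(eta)
    eta[x] += 1
    return stabilize_with_counts(eta)[0]

def laplacian(m):
    """Toppling matrix Delta for V = {0, ..., m-1} in d = 1."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(m)] for i in range(m)]
```

One can then verify that `stabilize_with_counts(eta)` equals `eta − ∆n` componentwise, and that `a(a(eta, x), y) == a(a(eta, y), x)` for any pair of sites, as (5) and (7) assert.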

DEFINITION 2.1. Let η ∈ H. For W ⊆ Vn, W ≠ ∅, we call the pair (W, ηW) a forbidden subconfiguration (FSC) if for all x ∈ W:

η(x) < Σ_{y∈W\{x}} (−∆x,y)

If for η ∈ Ω there exists an FSC (W, ηW), then we say that η contains an FSC. A configuration η ∈ Ω is called allowed if it does not contain forbidden subconfigurations. The set of stable allowed configurations is denoted by R0. Let us denote the (unique) set of recurrent configurations by R. This set is unique because the Markov chain restricted to this set is irreducible: any element of R can be reached from every other element of R. If we denote by A the set of all finite products of addition operators on Ω, then (A, ·) is an abelian semigroup. Furthermore, restricted to R, G ≡ (A, ·) is an abelian group. Some very useful theorems can be proven about the set of recurrent configurations in the ASM. A particularly interesting example is the following.


THEOREM 2.1. A stable configuration η ∈ Ω is recurrent if and only if it is allowed. So R = R0.

PROOF. The proof can be found in [5].

Theorem 2.1 is a rigorous statement of the phenomenon we encountered before by deducing that 1 1 0 0 1 is a transient configuration. Indeed, (W, ηW) = ({0, 1}, {0, 0}) is an FSC, so 1 1 0 0 1 is a transient configuration. A bijection can be made between rooted spanning trees and recurrent configurations, as has been illustrated by Dhar [3] using Kirchhoff's matrix tree theorem and Dhar's burning algorithm. Thereby, we arrive at the identity

|R| = det(∆) (10)

Because of the group structure of addition operators under composition, Dhar shows that G is isomorphic to the group of equivalence classes of configurations obtained by reducing modulo the toppling operation, which can be written as

G ≅ Z^Vn / ∆Z^Vn (11)

where ∆ again denotes the lattice Laplacian. It is known that an irreducible Markov chain has a stationary distribution if and only if all of its states are recurrent. Restricted to R, the Markov chain ηn is irreducible, so a stationary distribution on R exists. Since there is strictly positive probability that a recurrent configuration will be reached in the process ηn, n ∈ N, the Markov chain will eventually reach that class and remain in R forever. Now, denote the stationary distribution on R by µ(η).
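Dhar's burning algorithm gives a practical test for the recurrence criterion of Theorem 2.1. The sketch below is our own implementation for d = 1 with open boundaries (stable heights in {0, 1}): a site burns once its height is at least its number of still-unburnt neighbours, and the configuration is recurrent exactly when every site burns, since any leftover unburnt set is a forbidden subconfiguration:

```python
def is_recurrent_1d(eta):
    """Dhar's burning test in d = 1: recurrent iff all sites burn."""
    unburnt = set(range(len(eta)))
    progress = True
    while progress:
        progress = False
        for x in list(unburnt):
            nb = sum(1 for y in (x - 1, x + 1) if y in unburnt)
            if eta[x] >= nb:             # enough height to burn this site
                unburnt.discard(x)
                progress = True
    return not unburnt                   # a leftover unburnt set is an FSC
```

For instance, 1 1 0 0 1 fails the test (the two adjacent zeros never burn), matching the transience argument in the text.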

THEOREM 2.2. The process ηn, n ∈ N with transition operator

P f(η) = Σ_{x∈Vn} p(x) f(axη) (12)

has the stationary distribution

µ = (1/|R|) Σ_{η∈R} δη (13)

i.e. µ is simply the uniform distribution on R.

PROOF.

Σ_{η∈R} P f(η) µ(η) = (1/|R|) Σ_{η∈R} Σ_{x∈Vn} p(x) f(axη)
= (1/|R|) Σ_{x∈Vn} p(x) Σ_{η∈R} f(axη)
= (1/|R|) Σ_{x∈Vn} p(x) Σ_{η′∈R} f(η′), since ax : R → R is a bijection
= Σ_{x∈Vn} p(x) ((1/|R|) Σ_{η′∈R} f(η′))
= (1/|R|) Σ_{η′∈R} f(η′)
= Σ_{η′∈R} f(η′) µ(η′)

Previous considerations were all made for finite system size Vn. An interesting question is to look at the


infinite volume limit. What happens if we let Vn ↑ Z^d by letting n → ∞? Bak, Tang and Wiesenfeld [1] define criticality by the emergence of power-law behaviour in avalanche sizes. Redig [5] takes another perspective, based on the expected avalanche size in the large volume limit. We turn to the large volume limit later in this report.


3 Introduction to mixed dissipative/source systems

In the previous section, we have seen that the 'classical' finite-volume Abelian Sandpile Model is called 'abelian' for a reason. The elementary abelian property of toppling, TxTy = TyTx, together with the well-definedness of the stabilization operator S, guarantees the abelian property of the addition operators: axay = ayax. This property is very important for the mathematics and simulation of the Abelian Sandpile Model: regardless of the order of legal topplings, the resulting stable configuration S(η) ∈ Ω is always the same, starting from some η ∈ H. Let us now define a general finite-volume toppling matrix for systems in which sources and sinks are present. Again, Vn = [−n, n]^d ∩ Z^d is our finite lattice, and we denote by Dn ⊂ Vn the set of dissipative sites and by Fn ⊂ Vn the set of source sites. Viewing D as the set of dissipative sites in infinite volume and F as the source set in Z^d, we have Cn = C ∩ Vn, Dn = D ∩ Vn and Fn = F ∩ Vn. Denoting by Cn the set of normal, conservative sites, we have Vn = Cn ∪ Dn ∪ Fn as a union of pairwise disjoint sets.

DEFINITION 3.1. The finite-volume toppling matrix ∆^{Dn,Fn}_{x,y}, for x, y ∈ Vn = [−n, n]^d ∩ Z^d, is defined as

∆^{Dn,Fn}_{x,y} =
    −1       for x, y ∈ Vn, |x − y| = 1
    2d + 1   for x = y ∈ Dn
    2d       for x = y ∈ Cn
    2d − 1   for x = y ∈ Fn     (14)

Upon toppling at a site x ∈ Dn, which happens at a height of at least 2d + 1, the site distributes 2d grains to its nearest neighbours and loses one grain to an invisible sink. Similarly, upon toppling at a site x ∈ Fn, which happens at a height of at least 2d − 1, the site distributes 2d grains to its nearest neighbours and creates one grain ex nihilo. In this section, we focus on mixed sink/source systems, with normal, dissipative and source sites. We begin by observing some behaviour in one dimension.
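In one dimension, Definition 3.1 can be written down explicitly. The following sketch (our own helper, not part of the thesis code) builds ∆^{Dn,Fn} from a string describing the site types, with 'd' for dissipative, 'c' for conservative and 'f' for source:

```python
def mixed_toppling_matrix(sites):
    """Toppling matrix of Definition 3.1 in d = 1 for a string over {'d','c','f'}."""
    diag = {'d': 3, 'c': 2, 'f': 1}   # 2d+1, 2d, 2d-1 with d = 1
    n = len(sites)
    M = [[0] * n for _ in range(n)]
    for i, s in enumerate(sites):
        M[i][i] = diag[s]             # diagonal entry depends on the site type
        if i > 0:
            M[i][i - 1] = -1          # -1 between nearest neighbours
        if i + 1 < n:
            M[i][i + 1] = -1
    return M
```

For the system δνσ this gives the matrix [[3, −1, 0], [−1, 2, −1], [0, −1, 1]].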

Stabilizability We aim to provide some conjectures about finite-system configurations that are al- ways stabilizable in a finite time. That is, via a finite sequence of topplings, one arrives at a unique stable final configuration. Can we characterize a class of subsystems that are always stabilizable? Firstly, the definition of stabilizability, metastabilizability and unstabilizability is given below. Note that a system is fully characterized by the sets Cn,Dn,Fn.

DEFINITION 3.2. Characterization of systems

A system is called

1. stabilizable, if for every unstable η0 ∈ H there exists a finite legal toppling sequence (x_n)_{n=1}^{N}(η0) such that ξ = ηN = TxN TxN−1 ...Tx1 (η0) satisfies ξ(x) < ∆xx for all x ∈ Vn;

2. metastabilizable, if there exists η0 ∈ H such that for every legal toppling sequence (x_n)_{n=1}^{∞}(η0) and for all N ∈ N, ηN = TxN TxN−1 ...Tx1 (η0) is unstable, and there exists M ∈ N such that the total mass is bounded: Σ_x ηN(x) < M;

3. unstabilizable, if there exists η0 ∈ H such that for every legal toppling sequence (x_n)_{n=1}^{∞}(η0) and for all N ∈ N, ηN = TxN TxN−1 ...Tx1 (η0) is unstable, and the total mass Σ_x ηN(x) diverges.

For example, a system that contains only normal and dissipative sites is always stabilizable. Mass cannot build up in the interior of Vn if no sources are present: at every toppling of a dissipative site one grain is lost, so the total mass of the system Σ_{x∈Vn} η(x) is non-increasing, and eventually the configuration becomes stable. Some examples are given below. The bold-faced numbers indicate the sequence of decreasing heights on the source site, indicating stabilizability. This is all in d = 1, and

because small system sizes are considered, we can write it out by hand. Now, we denote a dissipative site by δ, a source site by σ and a normal site by ν.

Table 1: A stabilizable δσδ system with initial configuration 0n0.

δ σ δ   Sum
0 n 0   n
1 n-1 1   n+1
2 n-2 2   n+2
3 n-3 3   n+3
0 n-1 0   n-1

It is clear that this small configuration is stabilizable, since it is losing mass at a steady rate and will eventually reach a stable configuration.

Table 2: A stabilizable νδσδν system with initial configuration 00n00.

ν δ σ δ ν   Sum
0 0 n 0 0   n
0 1 n-1 1 0   n+1
0 2 n-2 2 0   n+2
0 3 n-3 3 0   n+3
1 1 n-2 1 1   n+2
1 2 n-3 2 1   n+3
1 3 n-4 3 1   n+4
2 1 n-3 1 2   n+3
0 3 n-4 3 0   n+2
1 1 n-3 1 1   n+1

The stabilizability of this configuration can clearly be seen.

Table 3: A metastabilizable νσν system with initial configuration 0n0. This time, sites are toppled conveniently to demonstrate the periodicity of configurations, which indicate the metastabilizability of this system.

ν σ ν   Sum
0 n 0   n
1 n-1 1   n+1
2 n-2 2   n+2
0 n 0   n


Table 4: An unstabilizable δσσδ system with initial configuration 0010. Mass builds up in the system because two sources next to each other in d = 1 are never stabilizable.

δ σ σ δ   Sum
0 0 1 0   1
0 1 0 1   2
1 0 1 1   3
1 1 0 2   4
2 0 1 2   5
2 1 0 3   6
3 0 2 0   5
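Hand computations such as Tables 1-4 can be checked mechanically. The following sketch (our own code, using a deterministic leftmost-first toppling rule) evolves a 1D mixed system and reports whether it reaches a stable configuration, revisits a configuration (periodicity, i.e. metastabilizable behaviour), or exhausts a step budget while mass keeps building up:

```python
def evolve(sites, h, max_steps=2000):
    """Evolve a 1D mixed system; sites is a string over {'d', 'c', 'f'}.

    Thresholds for d = 1: dissipative 3, conservative 2, source 1. A toppling
    site sends one grain to each neighbour (grains over the boundary are lost),
    so its height drops by exactly its threshold (3, 2 or 1 respectively).
    """
    thr = {'d': 3, 'c': 2, 'f': 1}
    h = list(h)
    seen = {tuple(h)}
    for _ in range(max_steps):
        unstable = [i for i, s in enumerate(sites) if h[i] >= thr[s]]
        if not unstable:
            return 'stable', h
        i = unstable[0]                  # leftmost-first rule
        h[i] -= thr[sites[i]]
        for j in (i - 1, i + 1):
            if 0 <= j < len(h):
                h[j] += 1
        if tuple(h) in seen:
            return 'cycle', h            # periodic: metastabilizable behaviour
        seen.add(tuple(h))
    return 'growing', h                  # neither stabilized nor repeated in budget
```

For example, `evolve("cfc", [0, 2, 0])` detects the period shown in Table 3, while the δσσδ system of Table 4 never returns 'stable'.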

Among other observations, we see that νσν is periodic in its configurations: after a finite number of legal topplings, the same configuration is reached. From this, we conjecture that a subsystem νσν embedded in some larger system is never stabilizable when given a configuration 0n0 with n ≥ 2, or n0m with n + m ≥ 3. That is, a source site has to be next to at least one dissipative site in order for the configuration to be stabilizable in general. From this, one can also deduce that the system δσν is stabilizable. But, as we shall see, the embedding of δσν in a larger system is rather complicated. We can therefore formulate some conjectures about the behaviour of one-dimensional mixed sink/source systems as follows.

• A system containing subsystem σσ is unstabilizable.

• A system consisting of configurations δσδ and further only ν (and δ) is always stabilizable.

• A system with source boundaries, and furthermore only ν or δσδ configurations, conserves mass and is therefore metastabilizable.

• Generally, a system containing the subsystem νσν is unstabilizable.

• A system consisting of only ν and δ sites is always stabilizable.

• When embedded in a finite lattice of dissipative sites, the subsystem σν is sometimes stabilizable. The stabilizing behaviour depends strongly on the distance of the subsystem σν to the boundary of the lattice. When multiple σν subsystems are present, the behaviour also depends on the distance between them. This behaviour is rather chaotic.

In fact, we can summarize these statements in the following conjecture, which is rather surprising.

CONJECTURE 3.1. Characterizations of stabilizability For a general system consisting of normal, dissipative and source sites, with associated system matrix ∆Dn,Fn , the following statements are true. A system is

1. stabilizable ⇔ ∀λ ∈ λ(∆Dn,Fn ): λ > 0

2. metastabilizable ⇔ ∃λ ∈ λ(∆Dn,Fn ): λ = 0

3. unstabilizable ⇔ ∃λ ∈ λ(∆Dn,Fn ): λ < 0

Motivation. This conjecture was obtained while proving Dhar's formula, documented in the following sections. When simulating different systems in the one-dimensional case, we started to see a one-to-one correspondence between the inverse toppling matrix and the behaviour of the stabilizing process. For example, consider the one-dimensional system σννν...νννσ. We know that this system is metastabilizable: when all sites topple at once, no mass is lost. We then see that det(∆Dn,Fn) = 0.


Also, the system ννν...σσ...ννν is unstabilizable, and we then see that det(∆Dn,Fn) < 0. However, linking the determinant of ∆Dn,Fn directly to the stabilizability of a system with toppling matrix ∆Dn,Fn may be too ambitious. We therefore state this weaker conjecture in terms of eigenvalues, knowing that the determinant of a matrix is the product of its eigenvalues.
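Conjecture 3.1 is easy to probe numerically in d = 1. The sketch below is our own check, assuming numpy is available; it rebuilds the matrix of Definition 3.1 and inspects its smallest eigenvalue:

```python
import numpy as np

def mixed_matrix(sites):
    """Toppling matrix of Definition 3.1 (d = 1) for a string over {'d','c','f'}."""
    diag = {'d': 3, 'c': 2, 'f': 1}          # 2d+1, 2d, 2d-1 with d = 1
    n = len(sites)
    M = np.zeros((n, n))
    for i, s in enumerate(sites):
        M[i, i] = diag[s]
        if i > 0:
            M[i, i - 1] = M[i - 1, i] = -1   # symmetric nearest-neighbour coupling
    return M

def min_eigenvalue(sites):
    """Smallest eigenvalue; conjectured sign: > 0 stabilizable,
    = 0 metastabilizable, < 0 unstabilizable."""
    return float(np.linalg.eigvalsh(mixed_matrix(sites)).min())
```

For instance, δνδ gives a positive minimum (stabilizable), the mass-conserving σννσ gives exactly 0, and νσσν, which contains the forbidden σσ pair, gives a negative minimum, consistent with the conjecture.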

Meta- and unstabilizability It turns out that we cannot conclude much about meta- or unstabilizable systems. The notion of extended abelianness has been disproved. By extended abelianness we mean that a metastabilizable system would show a finite set of configurations in the stabilizing process, where only a finite number of 'recurrent' unstable configurations are encountered. Regardless of the order of topplings, the same configurations would emerge and one could define a measure on these recurrent unstable configurations. However, this is not the case, as basic numerical simulations show. A first example shows that the system order is very important. Consider a system δσσδδ with initial height function 00100. This configuration is not stabilizable, even though its total mass of 1 is less than the mass of the minimally stable configuration 20022, which has total mass 6. A system consisting of the same vertices but in a different order, say δσδσδ with initial configuration 01010, is stable within two topplings: using the 'left first' toppling rule, we have 01010 → 10110 → 10201. Via the 'select most unstable vertex' rule, we obtain the same result. This makes it very difficult, in general, to make predictions about the behaviour of large mixed sink/source systems, since the behaviour depends largely on the micro-structure of the system. As an example of the failed generalized abelianness conjecture, consider the metastabilizable system νσσσν with initial configuration 22222. It depends on the toppling algorithm which set of unstable configurations is seen. At this point, we therefore have to restrict ourselves to stabilizable systems.


4 Markov Processes, semigroups and generators

In this section, we will develop the theory needed to support our mathematical investigation of the Abelian Sandpile Model with dissipation and anti-dissipation. First, some definitions are needed. We will begin by defining stochastic processes and in particular Markov processes. Associated to Markov processes are semigroups and generators. In various textbooks, such as [8], these definitions are stated very precisely; in order to analyze the theoretical framework of stochastic processes, this is necessary. We choose a slightly less rigorous way of defining and reasoning. The most important thing to remember is that under certain conditions, there is a natural one-to-one correspondence between a Markov process, its semigroup and its generator. This introduction is largely based on [9].

4.1 Definitions We begin by defining Markov processes, their associated semigroups and corresponding generators. A Markov process is a specific example of a stochastic process.

DEFINITION 4.1. Stochastic Process Given a probability space (X , F ,P ) and a measurable space (Ω, Σ), a stochastic process is a collection of Ω-valued random variables indexed by a totally ordered set T :

{Xt : t ∈ T }

The space Ω is referred to as the state space of the process.

Stochastic processes come in a wide variety of shapes and forms: among other things, they are extensively used in physics, economics and biology. We will focus on a class of stochastic processes with a special property, called the Markov property, named after the famous Russian mathematician Andrey Markov (1856-1922). Stochastic processes with the Markov property are called Markov processes. From now on, we take T = [0, ∞). Furthermore, we assume that Ω is countable.

DEFINITION 4.2. Markov Property A stochastic process {X_t : t ≥ 0} on a state space Ω is said to have the Markov property if for all s, t ≥ 0 and all bounded measurable f : Ω → R the equality

E (f(Xt+s)|Fs) = E [f(Xt+s)|Xs]

where F_s ≡ σ{X_r : 0 ≤ r ≤ s}, holds. Alternatively, we can formulate the Markov property as follows: a stochastic process {X_t : t ≥ 0} is called a Markov process if for all t > 0 and all 0 < t_1 < ... < t_n < t, n ∈ N,

E[f(X_t) | X_{t_1}, ..., X_{t_n}] = E[f(X_t) | X_{t_n}]

for all measurable and bounded f : Ω → R. One can see this property as a form of loss of memory: conditioning on the entire past of the process up to a time s is equivalent to conditioning only on the value of the process at time s. Also, given the state of the process at time s, the past and the future of the process are independent. A family of operators naturally associated to a Markov process {X_t, t ≥ 0} is the so-called semigroup {S_t, t ≥ 0}.

DEFINITION 4.3. Semigroup For f ∈ C_b(Ω), the semigroup associated to a Markov process {X_t, t ≥ 0} with state space Ω is defined as

Stf(x) = E[f(Xt)|X0 = x]

11 4 MARKOV PROCESSES, SEMIGROUPS AND GENERATORS

We will also use the notation S_t f(x) = E[f(X_t) | X_0 = x] = E_x[f(X_t)]. S_t has a number of useful properties, justifying the name ‘semigroup’. Among others, for f ∈ C_b(Ω):

1. S0f = f

2. St1 = 1

3. f ≥ 0 implies Stf ≥ 0

4. for s, t > 0, we have the semigroup property: StSs = St+s

5. St is a contraction semigroup: ||Stf||∞ ≤ ||f||∞. Properties 1-3 are trivial to see. Property 4 follows from the observations

S_{t+s}f(x) = E[f(X_{t+s}) | X_0 = x] = E_x[f(X_{t+s})] = E_x[E[f(X_{t+s}) | X_s]]

= E_x[E_{X_s}[f(X_t)]], by the Markov property of X_t,
= E_x[(S_t f)(X_s)] = (S_s(S_t f))(x) = (S_s S_t f)(x)

This expression is symmetric in t and s, from which it follows that St+s = StSs = SsSt. We are now ready to define the generator of a Markov process {Xt, t ≥ 0}.

DEFINITION 4.4. Generator Let S_t be the semigroup of a Markov process {X_t, t ≥ 0}. For suitably measurable f : Ω → R, we define the generator L by

Lf = lim_{t↓0} (S_t f − f)/t (15)

for functions f for which this limit exists.

One can interpret the generator of the semigroup St as the ‘derivative’ of St at t = 0, or equivalently as the expected behaviour of the process after a very short time. Indeed, for f ∈ Cb(Ω) we have

Ex(f(Xt)) = f(x) + tLf(x) + o(t), as t ↓ 0.

The Markov process {X_t, t ≥ 0}, the semigroup S_t and the generator L are, under suitable conditions, in one-to-one correspondence. In the case of a finite state space Ω, we can write the semigroup as a matrix where

(S_t)_{x,y} = P(X_t = y | X_0 = x) (16)

Also, when L is a bounded operator, the semigroup with generator L can be written as

S_t = e^{tL} = Σ_{n=0}^∞ (t^n L^n)/n! (17)

The boundedness of L ensures that St is well-defined in this case. Indeed, we have

‖ Σ_{n=0}^∞ t^n L^n / n! ‖ ≤ Σ_{n=0}^∞ t^n ‖L^n‖ / n! ≤ Σ_{n=0}^∞ t^n ‖L‖^n / n! = e^{t‖L‖} < ∞.

In the case of a general generator L, one cannot simply write the semigroup as in equation (17), because this expression is not guaranteed to have meaning for unbounded L. However, under certain conditions, a natural connection can be made between a Markov generator L and its semigroup {S_t : t ≥ 0}, a result that is known as the Hille-Yosida theorem. For these conditions, we refer the reader to for example [8].
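For bounded generators on a finite state space, the series (17) and the semigroup properties can be checked numerically. The sketch below is not from the thesis: the 3-state rate matrix is an arbitrary illustrative example.

```python
import numpy as np

# A bounded generator on a 3-state space: off-diagonal entries are jump
# rates, diagonal entries make each row sum to zero.
L = np.array([[-2.0, 1.5, 0.5],
              [ 1.0, -1.0, 0.0],
              [ 0.3,  0.7, -1.0]])

def semigroup(L, t, terms=60):
    """Truncated exponential series S_t = sum_{n<terms} t^n L^n / n!."""
    S = np.zeros_like(L)
    term = np.eye(L.shape[0])
    for n in range(terms):
        S += term
        term = term @ L * t / (n + 1)
    return S

S1 = semigroup(L, 1.0)
# Rows of S_t are probability distributions (properties 2 and 3) ...
assert np.allclose(S1.sum(axis=1), 1.0)
assert (S1 >= -1e-12).all()
# ... and the semigroup property S_{t+s} = S_t S_s (property 4) holds.
assert np.allclose(semigroup(L, 1.7), semigroup(L, 1.0) @ semigroup(L, 0.7))
```

The truncation at 60 terms is harmless here because ‖tL‖ is small; for large ‖tL‖ the boundedness estimate above tells us how many terms suffice.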


THEOREM 4.1. Hille-Yosida The one-to-one connection between a Markov generator and a contraction semigroup is given by

Lf = lim_{t→0} (S_t f − f)/t (18)

and vice versa

S_t = lim_{n→∞} (I − (t/n) L)^{−n} (19)

These definitions, relations and theorems can be viewed in a much wider sense than we have done here. For a profound discussion, see for example [8]. We are almost finished introducing general definitions and properties associated to Markov processes. It is useful to define the notion of an invariant measure.

DEFINITION 4.5. Invariant measure Given a family (S_t)_{t≥0} of Markov semigroup operators on a measurable space (Ω, Σ) as before, a (positive) σ-finite measure µ on (Ω, Σ) is said to be invariant for (S_t)_{t≥0} if for every bounded measurable function f : Ω → R and every t ≥ 0:

∫_Ω S_t f dµ = ∫_Ω f dµ (20)

In general, the invariant measure is only defined up to a multiplicative constant. When it is finite, it is a natural choice to normalize it by a positive constant to construct a probability measure. Then, it has a clear probabilistic meaning for the associated Markov process {Xt, t ≥ 0}. If the process starts at time t = 0 from X0 with initial distribution µ, then it keeps this distribution for all t, since by the law of total expectation and the Markov property it is true that for any bounded and measurable function f :Ω → R:

E[f(X_t)] = E[E[f(X_t) | X_0]] = E[S_t f(X_0)] = ∫_Ω S_t f dµ = ∫_Ω f dµ = E[f(X_0)]

It can be proven rather easily that the following holds: µ is an invariant measure for a family of Markov semigroups (S_t)_{t≥0} ⇔ ∫_Ω Lf dµ = 0 for all f : Ω → R bounded and measurable.
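This criterion is easy to illustrate on a small example. The sketch below (the 3-state rate matrix is an arbitrary choice, not from the thesis) computes the invariant probability measure as the left null vector of L and checks both ∫ Lf dµ = 0 and invariance under S_t.

```python
import numpy as np

# An arbitrary irreducible 3-state generator (rows sum to zero).
L = np.array([[-1.0, 0.6, 0.4],
              [ 0.2, -0.5, 0.3],
              [ 0.5, 0.5, -1.0]])

# The invariant measure solves mu L = 0: take the left eigenvector of L
# with eigenvalue 0 and normalize it to a probability measure.
w, V = np.linalg.eig(L.T)
mu = np.real(V[:, np.argmin(np.abs(w))])
mu = mu / mu.sum()

def semigroup(L, t, terms=60):          # S_t = e^{tL} via its power series
    S, term = np.zeros_like(L), np.eye(len(L))
    for n in range(terms):
        S += term
        term = term @ L * t / (n + 1)
    return S

f = np.array([1.0, -2.0, 5.0])          # an arbitrary bounded "function"
assert abs(mu @ (L @ f)) < 1e-8                             # int Lf dmu = 0
assert abs(mu @ (semigroup(L, 2.0) @ f) - mu @ f) < 1e-8    # invariance
```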

4.2 Deriving the generator of Markov Processes

Poisson process Let us take a short break and compute the Markov generator of a Poisson process. This is a process on the space Ω = N, where at random times the process ‘jumps one step up’. The process starts from some x ∈ N, so that X_0 = x. After a time T_1 ∼ exp(λ) has elapsed, the process jumps up by one. This is repeated in time, where the waiting times T_1, ..., T_n are i.i.d. exponentially distributed with rate λ. The memorylessness of the exponential distribution is important: in fact, this memoryless property guarantees the Markov property of the process, and thereby the existence of its semigroup and generator.


Figure 2: Realizations of Poisson processes X_t with rates λ = 2, 4 and 8.

For small enough t, we may keep only the terms of order t. Let N_t denote the random variable counting the number of jumps of the process in the time interval [0, t]. Because the waiting times are exponentially distributed, we can calculate

S_t f(x) = E_x[f(x + N_t)] (21)
= f(x) P(N_t = 0) + f(x + 1) P(N_t = 1) + O(t²), for small t (22)
= e^{−tλ} f(x) + (1 − e^{−tλ}) f(x + 1) + O(t²) (23)

It follows from l'Hôpital's rule that

Lf(x) = lim_{t→0} [e^{−tλ} f(x) + (1 − e^{−tλ}) f(x + 1) − f(x)] / t = λ (f(x + 1) − f(x))

This is the generator of a Poisson process with rate λ.

LEMMA 4.1. Distribution of N_t When defining a continuous-time Markov process {X_t, t ≥ 0} with fixed rate λ on a state space Ω, we postulate that the transition times T_1, T_2, ... are i.i.d. exponentially distributed with parameter λ, such that E[T_i] = λ^{−1} for all i ∈ N. From this, it is easy to see that the random variable N_t representing the number of transitions in [0, t) is distributed as N_t ∼ Pois(λt).

PROOF. Let T_1, T_2, ... ∼ exp(λ) be independent identically distributed random variables. It is known that M_n ≡ Σ_{i=1}^n T_i ∼ Gamma(n, λ). Now, the probability that exactly n transitions of the Markov process occur in the interval [0, t) is equal to the probability that M_n ≤ t and M_{n+1} > t. Thus

P(N_t = n) = F_{M_n}(t) − F_{M_{n+1}}(t) (24)

where F_{M_n}(t) = P(M_n ≤ t). Since M_n is Gamma distributed with parameters n and λ, we have

P(N_t = n) = (λ^n / Γ(n)) ∫_0^t s^{n−1} e^{−λs} ds − (λ^{n+1} / Γ(n + 1)) ∫_0^t s^n e^{−λs} ds (25)

By partial integration, we have the identity

∫_0^t s^{n−1} e^{−λs} ds = (1/n) [ t^n e^{−λt} + λ ∫_0^t s^n e^{−λs} ds ]

Substituting this expression back in (25), many terms cancel and eventually

P(N_t = n) = e^{−λt} (λt)^n / n! (26)

which is the Poisson probability mass function with parameter λt.
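The identity (24)-(26) can also be checked numerically. The sketch below (the rate λ = 2 and time t = 3 are illustrative choices) evaluates the Gamma distribution functions by crude midpoint integration and compares their difference with the Poisson probability mass function.

```python
import math

lam, t = 2.0, 3.0    # illustrative rate and time, not from the thesis

def gamma_cdf(n, lam, t, steps=100000):
    """F_{M_n}(t) = int_0^t lam^n s^(n-1) e^(-lam s) / (n-1)! ds, midpoint rule."""
    h = t / steps
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * h
        total += lam**n * s**(n - 1) * math.exp(-lam * s) / math.factorial(n - 1)
    return total * h

for n in range(1, 6):
    poisson = math.exp(-lam * t) * (lam * t)**n / math.factorial(n)
    diff = gamma_cdf(n, lam, t) - gamma_cdf(n + 1, lam, t)
    # F_{M_n}(t) - F_{M_{n+1}}(t) agrees with the Poisson pmf e^{-lt}(lt)^n/n!.
    assert abs(diff - poisson) < 1e-6
```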


The generator of a discrete Markov jump process Lemma 4.2 states a very important property of continuous-time Markov processes, something which we have used a number of times earlier in this section.

LEMMA 4.2. In order to have the Markov property, a continuous-time stochastic process {Xt, t ≥ 0} has to make transitions from state to state at exponentially distributed times.

PROOF. Let {Xt, t ≥ 0} be a Markov process on a state space Ω. Furthermore, let X0 = x. Let T denote the random variable representing the first time of transition from x ∈ Ω to another state. Then

P(T > t + s | T > s) = P(X_r = x ∀r ∈ [0, t + s] | X_r = x ∀r ∈ [0, s])

= P(X_r = x ∀r ∈ [s, t + s] | X_s = x), by the Markov property of X_t

= P(X_r = x ∀r ∈ [0, t] | X_0 = x), by time-homogeneity
= P(T > t)

The only memoryless continuous distributions are the exponential distributions, so memorylessness completely characterises the distribution of T as exponential. The rate of T can be state-dependent. In a more general setting, therefore, for x, y ∈ Ω we define the rate function c : Ω × Ω → R. The numbers c(x, y) are taken as the rates to get from x to y, and can be viewed as a ‘probability per unit time’. Accordingly, the probability that a transition from state x leads to state y is given by

Π(x, y) ≡ c(x, y) / Λ_x, where Λ_x ≡ Σ_{y≠x} c(x, y) (27)

A natural parameter for this is Λ_x, which obviously depends on x. This way, if Λ_x is small, because the exit rates from x to any y ∈ Ω are small, the expected resting time of the process in x, which is given by Λ_x^{−1}, is large. Let us now proceed to derive a general expression for the generator of a Markov process {X_t, t ≥ 0}. To obtain it, we expand S_t f(x) to first order in t, just as in the derivation of the generator of the Poisson process. Again, we let N_t denote the random variable counting the number of transitions of the process in [0, t]. So, for small t:

S_t f(x) = E_x[f(X_t)] = E_x[f(X_t) · I(N_t ∈ {0, 1})] + O(t²)
= P(N_t = 0) f(x) + P(N_t = 1) Σ_{y∈Ω} Π(x, y) f(y) + O(t²)
= e^{−Λ_x t} f(x) + (1 − e^{−Λ_x t}) Σ_{y∈Ω} Π(x, y) f(y) + O(t²)

Let me explain what is happening here. If N_t = 0, the process has stayed in x for a time t ≤ T_1, where T_1 ∼ exp(Λ_x). This happens with probability P(N_t = 0) = P(T_1 ≥ t) = 1 − P(T_1 ≤ t) = 1 − (1 − exp(−Λ_x t)) = exp(−Λ_x t). If N_t = 1, the process has taken exactly one step, which happens with probability P(N_t = 1) = Λ_x t exp(−Λ_x t). Furthermore, in one step the process can enter state y ∈ Ω with probability Π(x, y). We continue by expanding exp(−Λ_x t): for small t we have exp(−Λ_x t) = 1 − Λ_x t + O(t²). We obtain

S_t f(x) = f(x) − Λ_x f(x) t + Λ_x t Σ_{y∈Ω} Π(x, y) f(y) + O(t²)
= f(x) − t Λ_x Σ_{y∈Ω} Π(x, y) f(x) + Λ_x t Σ_{y∈Ω} Π(x, y) f(y) + O(t²), because Σ_y Π(x, y) = 1
= f(x) + t Σ_{y∈Ω} c(x, y)(f(y) − f(x)) + O(t²).

15 4 MARKOV PROCESSES, SEMIGROUPS AND GENERATORS

The last equation follows because c(x, y) = Λ_x Π(x, y). Consequently, for the Markov generator L of this general Markov process we can immediately conclude

Lf(x) = Σ_{y∈Ω} c(x, y)(f(y) − f(x)). (28)

This is the general expression for the generator of a discrete Markov jump process. In matrix form (finite state space), L contains the transition rates off the diagonal and its rows sum to zero.
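Equation (28) in matrix form is easy to realize concretely. The following sketch (the rate matrix c is an arbitrary example, not from the thesis) assembles L from the rates and verifies the two properties just mentioned.

```python
import numpy as np

# Arbitrary example rates c(x, y) on a 3-state space; the diagonal is unused.
c = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])

# Generator: c(x, y) off the diagonal, -Lambda_x = -sum_y c(x, y) on it,
# so every row sums to zero.
L = c - np.diag(c.sum(axis=1))

assert np.allclose(L.sum(axis=1), 0.0)

# Acting on a function f, L reproduces sum_y c(x,y)(f(y) - f(x)) directly.
f = np.array([1.0, 4.0, -2.0])
expected = np.array([sum(c[x, y] * (f[y] - f[x]) for y in range(3))
                     for x in range(3)])
assert np.allclose(L @ f, expected)
```

The first-order expansion derived above then reads E_x[f(X_t)] = f(x) + t (L @ f)[x] + o(t).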

4.3 The Feynman-Kac formula for countable state space Markov Processes The Feynman-Kac formula is very important when solving certain kinds of partial differential equations. Named after the prominent American theoretical physicist Richard Feynman (1918-1988) and the influential Polish-American mathematician Mark Kac (1914-1984), who published the formula in 1949, it has been widely used in the analysis of partial differential equations, most notably diffusion equations.

Figure 3: Richard Feynman and Mark Kac.2

It is often the case that one has to look at differential operators of the form

Lf + V f (29) where L is the generator of a Markov semigroup and V f is multiplication by some non-constant potential V . This happens in particular when solving Schr¨odinger-type equations. In the case of the sandpile model, the reader may guess that V has something to do with sinks and sources on top of the original model. Applied to our case, we have the following version of the Feynman-Kac formula. Its importance shall soon be clear.

THEOREM 4.2. Feynman-Kac formula Let {X_t, t ≥ 0} be a Markov process with generator L on a countable state space Ω, and let V : Ω → R be a bounded function. Define the diagonal ‘matrix’ V_{xy} = V(x) δ_{xy}. Then L + V is an infinitesimal generator and the semigroup Σ_t generated by L + V satisfies

Σ_t f(x) = e^{t(L+V)} f(x) = E_x[ f(X_t) e^{∫_0^t V(X_s) ds} ] (30)

This is a version of the Feynman-Kac formula.

2Sources: http://www.inspiremeyouth.com/richardfeynman/, https://en.wikipedia.org/wiki/Mark_Kac


PROOF. Expanding the right hand side of the Feynman-Kac formula to first order in t gives, for the integral in the exponent, ∫_0^t V(X_s) ds ≈ t V(x), and thus

Σ_t f(x) = E_x[ f(X_t) e^{∫_0^t V(X_s) ds} ]

= E_x[ f(X_t) e^{tV(x) + O(t²)} ]

= E_x[ f(X_t) e^{tV(x)} ] + O(t²)
= E_x[f(X_t)] + t V(x) E_x[f(X_t)] + O(t²)

Because L is the generator of {X_t, t ≥ 0}, for small t it holds that E_x[f(X_t)] = f(x) + t Lf(x) + o(t). Therefore, we continue by saying

E_x[ f(X_t) e^{∫_0^t V(X_s) ds} ] = t V(x) f(x) + f(x) + t Lf(x) + O(t²). (31)

Subtracting f(x) from this equation, dividing by t and letting t ↓ 0, we obtain our result:

lim_{t↓0} (Σ_t f(x) − f(x))/t = lim_{t↓0} (1/t) ( E_x[ f(X_t) e^{∫_0^t V(X_s) ds} ] − f(x) )
= lim_{t↓0} (1/t) ( t V(x) f(x) + f(x) + t Lf(x) − f(x) + O(t²) )
= (L + V) f(x)

We conclude that L + V is the generator of the semigroup Σ_t. To prove that Σ_t indeed has the semigroup property, we show that for t, s > 0, Σ_{t+s} = Σ_t Σ_s.

Σ_{t+s} f(x) = E_x[ f(X_{t+s}) e^{∫_0^{t+s} V(X_r) dr} ]
= E_x[ E[ f(X_{t+s}) e^{∫_0^{t+s} V(X_r) dr} | F_s ] ], where F_s ≡ σ{X_r, 0 ≤ r ≤ s}
= E_x[ e^{∫_0^s V(X_r) dr} E[ f(X_{t+s}) e^{∫_s^{t+s} V(X_r) dr} | F_s ] ]
= E_x[ e^{∫_0^s V(X_r) dr} E_{X_s}[ f(X_t) e^{∫_0^t V(X_r) dr} ] ], by the Markov property
= E_x[ e^{∫_0^s V(X_r) dr} (Σ_t f)(X_s) ]
= Σ_s(Σ_t f)(x)

as required. Because every semigroup has a unique generator and vice versa, we conclude that L + V is the generator of the semigroup Σ_t. We are now ready to apply the theory we have seen in this section thus far to the Abelian Sandpile Model.


5 Toppling numbers, Avalanches and Random Walks

To analyse the Abelian Sandpile Model in terms of stochastic processes we have to define the state space, semigroup and generator of the model. One fundamental identity is called Dhar's Formula, which is proven in [3]. It expresses the expected number of topplings of a site y ∈ V_n upon addition of a particle at x ∈ V_n in a recurrent configuration η ∈ R entirely in terms of ∆^{D_n,F_n}. In this section, we will make this formula precise. Furthermore, we will relate the ASM to the more general theory of Markov processes, whereby the Feynman-Kac formula will help us greatly. Notice that we are still working with finite system size V_n = [−n, n]^d ∩ Z^d. The following notions will bring us further in analysing the ASM with the help of the Feynman-Kac formula and Dhar's formula.

PROPOSITION 5.1. When V_n is finite and a system is stabilizable in the sense of Definition 3.2, there will be a finite set of recurrent configurations R ⊆ Ω. The number of configurations in the recurrence class is given by |R| = det(∆^{D_n,F_n})

PROOF. This is proven in [5], using the burning algorithm and the matrix tree theorem. Redig proves the theorem for the case V = C, but since it is done in full generality with respect to the toppling matrix ∆Dn,Fn , the proof also applies here, in the case of stabilizable systems.

DEFINITION 5.1. Toppling numbers function

For a legal sequence of topplings T_{x_1} T_{x_2} ... resulting after addition of a particle at x ∈ V_n (η → η + δ_x), such that the resulting configuration ∏_i T_{x_i}(η + δ_x) is stable, the toppling numbers function n : V_n × V_n × H → Z is defined as

n(x, y, η) = Σ_i I(x_i = y)

In other words, n(x, y, η) is the number of topplings of a site y ∈ Vn necessary to stabilize η + δx.

PROPOSITION 5.2. Finiteness of toppling numbers If a system is stabilizable, the addition operator a_x : Ω → Ω is well-defined. That is, it can be written as a composition of finitely many toppling operators and the resulting configuration a_x η is unique. We can therefore write

a_x η = S(η + δ_x) = η + δ_x − ∆^{D_n,F_n} n(x, ·, η)

where n(x, ·, η) denotes the vector of toppling numbers resulting from the stabilization of η + δ_x.

PROOF. This is proven by Redig [5] in the classical case; the general case is entirely analogous. Two important properties used in the proof remain the same for general systems: stabilizability guarantees the finiteness of the toppling numbers, and the ‘off-diagonal’ elements are still non-positive. Since the addition operator is well-defined for stabilizable systems and the group properties are preserved, we again have an invariant measure for the Markov chain on recurrent configurations. This fact is exploited in Dhar's Formula.

THEOREM 5.1. Dhar’s Formula For a stabilizable system and for η ∈ R,

E_µ[n(x, y, η)] = (∆^{D_n,F_n})^{−1}_{x,y} ≡ G(x, y) (32)

G(x, y) is called the Green’s function.


PROOF. Since η ∈ R and R is closed under the operators ax, it follows that axη ∈ R. Restricting ourselves to a site y ∈ Vn, we obtain the well-known fundamental relation

(a_x η)(y) = η(y) + δ_{x,y} − (∆^{D_n,F_n} n(x, ·, η))(y) (33)

where n(x, ·, η) this time denotes the ‘column vector’ indexed by y ∈ V_n (we hold x fixed). Integration of this equation with respect to the stationary measure on recurrent configurations will get us further. The stationary measure µ is uniform on the recurrent configurations, and the set of recurrent configurations R is closed under the addition operators {a_x : x ∈ V_n}. It follows that ∫(a_x η)(y) dµ = ∫ η(y) dµ. Consequently, integrating equation (33) with respect to the uniform measure on R, we have

∫(a_x η)(y) dµ = ∫ η(y) dµ + ∫ δ_{x,y} dµ − ∫(∆^{D_n,F_n} n(x, ·, η))(y) dµ (34)

and by the above remark and linearity of the integral, we deduce that

∫(∆^{D_n,F_n} n(x, ·, η))(y) dµ = δ_{x,y} (35)

From this, we can deduce the Green's function. Because ∫ n(x, y, η) dµ = E_µ[n(x, y, η)], multiplying by (∆^{D_n,F_n})^{−1} gives (provided that (∆^{D_n,F_n})^{−1} exists and (∆^{D_n,F_n})^{−1}_{x,y} ≥ 0 for all x, y ∈ V_n, which is guaranteed by stabilizability of the system),

G(x, y) = E_µ[n(x, y, η)] = (∆^{D_n,F_n})^{−1}_{x,y} (36)

Dhar’s formula plays a very important role in the study of finite-size Abelian Sandpile models. In the next chapters, the behaviour of different systems is linked to random walks. The Green’s function has an interpretation in terms of random walks: up to a multiplicative constant it is equal to the expected number of visits of a site y starting at a site x and killed upon leaving Vn.

The generator of the ASM In the generalized ASM containing normal, dissipative and source sites, the state space of the model is the set of height configurations H = (N ∪ {0})^{V_n}, and an element of the process is a height configuration η : V_n → N ∪ {0}. Restricted to the stabilizable systems, where the addition of a particle at u ∈ V_n in a configuration η produces a unique stable configuration a_u η, we have the generator of the ASM restricted to stabilizable systems:

L^∞ f(η) = Σ_{u∈V_n} [f(a_u η) − f(η)] (37)

This generator is derived by observing that the c(x, y) in (27) becomes c(η, ξ) = 1 if ξ = a_u η for some u ∈ V_n and c(η, ξ) = 0 otherwise. Therefore, the probability that the addition of a particle in a configuration η leads to a state ξ is Π(η, ξ) = |V_n|^{−1} if ξ can be written as a_u η, and Π(η, ξ) = 0 otherwise. Generator (37) is the generator of the Markov process that, with exponentially distributed intervals between events, assigns one particle at random to a site x ∈ V_n and immediately stabilizes the (possibly unstable) resulting configuration. Now, let us look at the class of general systems consisting of normal, dissipative and source sites. These are either stabilizable, metastabilizable or not stabilizable.

DEFINITION 5.2. The general generator of the Abelian Sandpile model is defined as

L^λ f(η) = λ Σ_{u∈V_n} I(η(u) ≥ ∆_{u,u}) [f(T_u η) − f(η)] + Σ_{u∈V_n} [f(η + δ_u) − f(η)] (38)


The interpretation of this generator is as follows. With rate 1, we keep adding particles to the system, and with rate λ, we topple unstable sites via legal topplings. It may be the case that the Markov process associated to this system remains bounded, even if the system is not stabilizable. On the other hand, if the system is stabilizable, we know that if η is stable, only finitely many topplings are required to stabilize η + δ_x. This observation motivates the following theorem. Denoting the semigroups associated to the generators (37) and (38) by S_t^∞ and S_t^λ respectively, we have

THEOREM 5.2. If a system is stabilizable, then for suitable measurable and bounded f

lim_{λ→∞} S_t^λ f = S_t^∞ f (39)

PROOF. Let f be bounded and measurable. Since the system is stabilizable and V_n is finite, only finitely many topplings are necessary to stabilize an unstable configuration. Denote the number of topplings by N, the i.i.d. toppling waiting times by τ_i ∼ exp(λ) and the addition waiting time by T ∼ exp(1). Observe that, with a change of variables, σ_i ≡ λτ_i ∼ exp(1). We then have to estimate the probability

P( Σ_{i=1}^N τ_i < T ) = P( (1/λ) Σ_{i=1}^N λτ_i < T ) = P( (1/λ) Σ_{i=1}^N σ_i < T ) → 1, as λ → ∞,

since T ∼ exp(1) and the σ_i ∼ exp(1). For fixed N, therefore, we have lim_{λ→∞} P( Σ_{i=1}^N τ_i < T ) = 1.

But T_{x_N} ... T_{x_1}(η + δ_x) is precisely the definition of a_x, so the processes associated to the different generators converge in distribution as λ → ∞. It follows that lim_{λ→∞} S_t^λ = S_t^∞.

5.1 Linking avalanche dynamics to random walks Using the results from previous sections, we have seen that if a system is stabilizable, Dhar’s formula gives us the finite-volume Green’s function in terms of the system matrix ∆Dn,Fn .

E_µ[n(x, y, η)] = G(x, y) = (∆^{D_n,F_n})^{−1}_{x,y} (40)

Together with the Feynman-Kac formula, we will attempt to derive some criteria for criticality and non-criticality in the dynamics of the Abelian Sandpile model. The definition of criticality will be given later.

The classical case In the classical case of the ASM, D_n = F_n = ∅, and therefore the toppling matrix for x, y ∈ V_n is given by

∆_{x,y} =
  2d,  if x = y ∈ V_n
  −1,  if |x − y| = 1
  0,   else      (41)

We know that the eigenvalues of ∆ are all positive. Suggestively writing L_{x,y} = −∆_{x,y}, we may thus write

∆^{−1} = (−L)^{−1} = ∫_0^∞ e^{tL} dt (42)


L is the generator of a continuous-time random walk moving on Vn and killed upon exiting Vn. It has already been demonstrated that for a semigroup St working on a finite state space Ω with corresponding Markov generator L, it holds that

(S_t)_{x,y} = (e^{Lt})_{x,y} = P(X_t = y | X_0 = x) (43)

So, we can view L as a generator of some Markov process {Xt, t ≥ 0}. In fact, this is the generator of a continuous-time symmetric random walk in Vn, killed upon leaving Vn. Knowing this, and knowing that L has a purely negative set of eigenvalues, we derive

(−L)^{−1}_{x,y} = ∫_0^∞ (e^{tL})_{x,y} dt
= ∫_0^∞ P(X_t = y | X_0 = x) dt
= E_x[ ∫_0^∞ I(X_t = y) dt ]

And thus, (−L)^{−1}_{x,y} can be interpreted as the expected total residence time at y of a continuous-time symmetric random walk starting at x, which is also the Green's function of a simple continuous-time random walk on V_n.

Example: Random walk in one dimension In the classical Abelian Sandpile Model, ∆ can be written as

∆ = 2(I − P)

where I is the L × L identity matrix and P is the Dirichlet transition matrix of a simple random walk {X_t, t ≥ 0} with trapping boundaries (infinite-capacity sinks); this is equivalent to leaving V_n in the general case. In one dimension, P is given by

 0 1/2  1/2 0 1/2         ......  P =  . . .  (44)        1/2 1/2 1/2 0

This transition matrix represents a trapped random walk on Z, where the random walk gets ‘killed’ at the points ±n ∈ Z, as is shown in Figure 4. This is equivalent to saying that the random walk gets killed upon leaving V_n.



Figure 4: 20 realizations of trapped simple random walks on Z, starting from 0 and trapped when reaching |n| = 100.

Let us look further at the matrix P . Because the L × L matrix P is tridiagonal and Toeplitz, we have the well-known set of eigenvalues of P :

λ(P ) = {cos(kπ/(L + 1)), k = 1, ..., L} (45)

So every eigenvalue of P satisfies |λ_i| < 1. Consequently, we can write the inverse as a geometric series:

Σ_{n=0}^∞ P^n = (I − P)^{−1} = ((1/2) ∆)^{−1} = 2∆^{−1} (46)

and therefore

∆^{−1} = (1/2) Σ_{n=0}^∞ P^n (47)

This immediately gives another insight into the connection between the Abelian Sandpile model and random walks. The Dirichlet transition matrix of a simple random walk encodes the expected number of visits of the random walk via the relation

G(x, y) ≡ ∆^{−1}_{x,y} = (1/2) Σ_{n=0}^∞ (P^n)_{x,y} (48)
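Identity (47) and the eigenvalues (45) are easy to confirm numerically; the sketch below does so for an illustrative system of L = 6 sites.

```python
import numpy as np

Lsize = 6
# Trapped 1-D random walk transition matrix: 1/2 to each neighbour,
# mass leaving at the boundaries is lost.
P = 0.5 * (np.eye(Lsize, k=1) + np.eye(Lsize, k=-1))
Delta = 2 * (np.eye(Lsize) - P)

# Geometric series sum_n P^n; it converges because |eigenvalues of P| < 1.
series = np.zeros_like(P)
term = np.eye(Lsize)
for _ in range(5000):
    series += term
    term = term @ P

assert np.allclose(0.5 * series, np.linalg.inv(Delta), atol=1e-6)   # (47)

# Eigenvalues of the tridiagonal Toeplitz matrix P are cos(k*pi/(L+1)), cf. (45).
eigs = np.sort(np.linalg.eigvalsh(P))
expected = np.sort(np.cos(np.arange(1, Lsize + 1) * np.pi / (Lsize + 1)))
assert np.allclose(eigs, expected)
```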

The general case In the general case, we can write the system matrix as

∆^{D_n,F_n} = ∆ + V_D = −L + V_D (49)

where ∆ denotes the classical lattice Laplacian and V_D is a diagonal matrix (the sink/source potential) with

(V_D)_{x,x} =
  0,  if x ∈ C_n
  1,  if x ∈ D_n
  −1, if x ∈ F_n

The reader may notice the similarities between equation (49) and equation (29). Equations like this frequently appear in the Parabolic Anderson Model (PAM). One should not simply be unmoved by

22 5 TOPPLING NUMBERS, AVALANCHES AND RANDOM WALKS

these things coming together so nicely! Indeed, carrying out the same derivation as before, assuming convergence of the right hand side of equation (50), we arrive at

G(x, y) = (∆^{D_n,F_n})^{−1}_{x,y} = ∫_0^∞ (e^{t(L−V_D)})_{x,y} dt (50)

In order to be able to use the Feynman-Kac formula, we define the indicator function

f_y(z) = δ_{y,z} = I(y = z) (51)

which is bounded and measurable. The Feynman-Kac formula now tells us that

G(x, y) = (∆^{D_n,F_n})^{−1}_{x,y} = ∫_0^∞ E_x[ e^{−∫_0^t V_D(X_s) ds} f_y(X_t) ] dt (52)

Interpretation of previous derivations In the previous sections, we started with the definition of Markov processes, their semigroups and generators. Via Dhar's formula, we were able to link the expected number of topplings of a site in V_n to the system matrix ∆^{D_n,F_n}. Minus the classical system matrix ∆ is also known as the generator of a continuous-time symmetric random walk on Z^d with transition rate 2d. Thus, we have found a connection between the avalanche dynamics in the Abelian Sandpile model and the theory of continuous-time random walks on Z^d. Thereby, the random walk can be interpreted as the propagation of the toppling dynamics. Moreover, the extra potential V_D introduces a ‘birth/killing’ factor in the continuous-time random walk, which has been made quantitative using the Feynman-Kac formula.

5.2 Towards infinite volume Until now, we have worked with the finite Abelian Sandpile model. Since in [5] criticality is defined in terms of the expectation of avalanche sizes being infinite, it is necessary to focus on the infinite-volume limit of the ASM. Redig discusses many aspects of this infinite-volume limit in section 4 of [5]. Does the thermodynamic limit

lim_{V_n ↑ Z^d} µ_{V_n} (53)

exist? Usually in the context of statistical physics, the existence of a thermodynamic limit is based on local considerations. However, since the classical ASM has very non-local behaviour, we have to look at other arguments. Let us begin by defining the notion of avalanches. Thereafter, criticality is defined in terms of the avalanche size expectation being infinite. However, this is a difficult subject. In the physics literature, criticality is often defined as the emergence of power laws, such that events take place on all time and length scales. It seems that BTW also take this position. Criticality is also recognizable by the absence of finite correlation lengths, or by the appearance of spanning clusters. In the chapter about physical aspects of the Abelian Sandpile model, we introduce some other models that show critical behaviour. For now, let us build upon the knowledge of previous chapters to define criticality. Firstly, we define the concept of avalanches for a finite-size system. Recall that if a system is stabilizable, for stable η ∈ Ω the identity

η + δ_x − ∆^{D_n,F_n} n(x, ·, η) = a_x η (54)

holds, where the column n(x, ·, η) is indexed by elements y ∈ Vn. Then we define the avalanche cluster by

DEFINITION 5.3. An avalanche cluster at a site x ∈ Vn is defined as

CVn (x, η) = {y ∈ Vn : n(x, y, η) > 0} (55)

Thus, an avalanche C_{V_n}(x, η) is the set of sites that topple at least once when a particle is added at x ∈ V_n to a configuration η.


By the Markov inequality, we then have an upper bound on the probability of toppling of a site y when ax is applied to η:

P(y ∈ C_{V_n}(x, η)) = P(n(x, y, η) ≥ 1) (56)
≤ E_µ[n(x, y, η)] (57)
= G(x, y) (58)

Looking back at (52) and (55), we can sum over y ∈ V_n to obtain a characterization of avalanche sizes. For stabilizable systems, we have by the Feynman-Kac formula

Σ_{y∈V_n} (∆^{D_n,F_n})^{−1}_{x,y} = Σ_{y∈V_n} G(x, y) = Σ_{y∈V_n} ∫_0^∞ E_x[ e^{−∫_0^t V_D(X_s) ds} f_y(X_t) ] dt (59)

= ∫_0^∞ E_x[ e^{−∫_0^t V_D(X_s) ds} ] dt (60)

For clarity, we repeat the definition of the general toppling matrix. As always, we denote Cn = C ∩ Vn, Dn = D ∩ Vn and Fn = F ∩ Vn.

DEFINITION 5.4. The finite-volume toppling matrix ∆^{D_n,F_n}_{x,y} restricted to V_n is defined as

∆^{D_n,F_n}_{x,y} =
  −1,     for x, y ∈ V_n, |x − y| = 1
  2d + 1, for x = y ∈ D_n
  2d,     for x = y ∈ C_n
  2d − 1, for x = y ∈ F_n      (61)

We denote by S the stabilization operator working on height configurations on V_n with the above toppling matrix (61). As before, we call C_{V_n}(x, η) the avalanche cluster: the set of sites that topple at least once when adding a particle at x to a configuration η. We also define µ as the uniform measure on the set of recurrent configurations R corresponding to the system matrix ∆^{D_n,F_n} as defined in (61).

Following Redig [5], we call the model critical if the expected avalanche size is infinite. Non- criticality is then defined as

DEFINITION 5.5. A system is non-critical if

lim sup_{n→∞} E_{µ_n}[|C_{V_n}(x, η)|] < ∞ (62)

Thus, a system is called non-critical if the expected avalanche size remains bounded even in the thermodynamic infinite-volume limit. Recalling the equivalent characterisation of avalanche sizes (60) combined with Markov's inequality, this results in the following lemma. Denote by G_n the Green's function restricted to the domain V_n. Then G_n(x, y) = (∆^{D_n,F_n})^{−1}_{x,y} by Dhar's formula, and thus by the Feynman-Kac formula:

G_n(x, y) = E_x[ ∫_0^∞ e^{−∫_0^t V_D(X_s) ds} I(X_t = y) dt ] (63)

As before, {Xt : t ≥ 0} is a continuous-time symmetric random walk walking in Vn with transition rates 2d, killed upon leaving Vn. Accordingly, the criterion for non-criticality can be formulated as in Lemma 5.1.

LEMMA 5.1. Denoting G_n = (∆^{D_n,F_n})^{−1}, a system is non-critical if

lim sup_{n→∞} Σ_{y∈V_n} G_n(x, y) < ∞ (64)


PROOF. This follows from the Markov inequality in equation (58), combined with (60) and (62).

Since the infinite-volume Abelian Sandpile model with the classical toppling matrix clearly exhibits criticality, and the totally dissipative model does not (which is proven hereafter), we expect there is some critical density of dissipative sites at which criticality is lost. These and other questions are addressed in the next section. It should be clear that in the case of finite systems, infinite avalanche clusters can never occur. It is at this point that we have to make the transition from the finite state space to the infinite state space of the random walk: V = Z^d. Some things derived in previous sections are no longer valid; for example, the group structure of the addition operators may be lost. For a profound discussion of stabilisation difficulties in the infinite-volume limit, we refer the reader to the paper by Fey, Meester and Redig [7] or by Redig and Fey [10].


5.3 Estimating critical behaviour

We now continue by substituting the Feynman-Kac formula into the expression for non-criticality in the case $F = \emptyset$. As a warm-up example, we consider the classical case and the totally dissipative case. We now write $V_D$ for the potential, which can be regarded as the indicator function of the set $D \subset \mathbb{Z}^d$.

Classical case For the classical case, the additional potential $V_D$ is zero for all $x \in \mathbb{Z}^d$. By the Feynman-Kac formula, we then have for $y \in \mathbb{Z}^d$:
\[
\sum_{y\in\mathbb{Z}^d} G(x,y) = \sum_{y\in\mathbb{Z}^d} \mathbb{E}_x\left[\int_0^\infty e^{-\int_0^t V_D(X_s)\,ds}\, I(X_t = y)\, dt\right] = \sum_{y\in\mathbb{Z}^d} \int_0^\infty \mathbb{E}_x\left[I(X_t = y)\right] dt = \infty
\]
so by Lemma 5.1 we do not have non-criticality in the classical case.

Totally dissipative case When a system is totally dissipative, we can write the potential as $V_D(x) = \gamma$ for all $x \in V_n$. The factor $\gamma$ is a generalisation of the case we have considered before, but it serves as an illustration that a constant dissipative system implies exponential avalanche size damping, and thus clearly non-critical behaviour:
\[
\limsup_{n\to\infty} \sum_{y\in V_n} G_n(x,y) = \limsup_{n\to\infty} \mathbb{E}_x^{CRW_n}\left[\int_0^\infty e^{-\int_0^t V_D(X_s)\,ds}\, I(X_t \in V_n)\, dt\right] \le \int_0^\infty \mathbb{E}_x\left[e^{-\gamma t}\right] dt = \int_0^\infty e^{-\gamma t}\, dt = \frac{1}{\gamma}
\]
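The bound above is easy to check numerically. The following minimal Monte Carlo sketch (an illustration, not part of the derivation) assumes a rate-$2d$ walk with constant killing rate $\gamma$; since the killing is position-independent, the position of the walk need not be tracked, and the expected survival time should approach $1/\gamma$:

```python
import random

def survival_time(gamma: float, d: int = 2, rng=random) -> float:
    """Time until the killing clock (rate gamma) beats every transition
    clock: events occur at total rate 2d + gamma, and each event is a
    kill with probability gamma / (2d + gamma)."""
    total_rate = 2 * d + gamma
    t = 0.0
    while True:
        t += rng.expovariate(total_rate)
        if rng.random() < gamma / total_rate:
            return t

random.seed(0)
gamma, n = 2.0, 200_000
est = sum(survival_time(gamma) for _ in range(n)) / n
# The estimate should be close to 1/gamma = 0.5.
```

The exponential damping is visible in the fact that the estimate stays bounded however large the lattice is taken.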

Local times Notice that we can rewrite the exponent in the Feynman-Kac formula as
\[
\int_0^t V_D(X_s)\, ds = \int_0^t I(X_s \in D)\, ds \tag{65}
\]
Rewriting (65) in terms of local times, we have
\[
\int_0^t V_D(X_s)\, ds = l_t(D) \tag{66}
\]
where the definition $l_t(A) = \int_0^t I(X_s \in A)\, ds$ has been used for any set $A \subset \mathbb{Z}^d$. For $t$ large enough, one can use the asymptotics of continuous-time random walk local times.

5.4 From a CRW to a DRW First let us state the following lemma about two exponentially distributed random variables:

LEMMA 5.2. If $X \sim \mathrm{Exp}(\lambda)$ and $Y \sim \mathrm{Exp}(\nu)$ are independent exponentially distributed random variables, then
\[
P(X \ge Y) = \frac{\nu}{\lambda + \nu} \tag{67}
\]


This can immediately be seen by the computation
\[
P(X \ge Y) = \int_0^\infty P(X \ge x \mid Y = x) f_Y(x)\, dx = \int_0^\infty (1 - F_X(x)) f_Y(x)\, dx = \int_0^\infty \nu e^{-(\lambda+\nu)x}\, dx = \frac{\nu}{\lambda + \nu}
\]
This lemma shows us that a CRW moving on a lattice containing dissipative sites indexed by the set $D$ can be converted into a DRW in which, at every dissipative lattice point, the random walk is killed with a probability directly related to the rates of the CRW. The killing rate is 1 on a dissipative site, while the transition rate of the CRW is $2d$ on $\mathbb{Z}^d$. Therefore, the probability of being killed at a dissipative site is the probability that the 'killing clock' rings earlier than the 'transition clock'. These clocks have independent exponentially distributed activation times, so by Lemma 5.2 the killing probability is $1/(2d+1)$, and the survival probability is $1 - 1/(2d+1) = 2d/(2d+1)$. This will help us convert the CRW into a DRW.
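Lemma 5.2, and the resulting killing probability $1/(2d+1)$, can be sanity-checked by a quick simulation sketch (taking $d = 2$, so transition rate $\lambda = 4$ and killing rate $\nu = 1$; the parameter values are illustrative):

```python
import random

random.seed(1)
lam, nu, n = 4.0, 1.0, 200_000  # transition rate 2d (with d = 2), killing rate 1

# Fraction of trials in which the killing clock (rate nu) rings no later
# than the transition clock (rate lam); Lemma 5.2 predicts nu/(lam + nu).
kills = sum(random.expovariate(nu) <= random.expovariate(lam) for _ in range(n))
est = kills / n
# Predicted killing probability: 1/(2d + 1) = 0.2
```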

LEMMA 5.3. By the Feynman-Kac formula, the Green's function on $V_n$ becomes
\[
G_n(x,y) = \int_0^\infty \left(e^{-t\Delta^{D_n,F_n}}\right)_{x,y} dt \tag{68}
\]
\[
= \mathbb{E}_x^{CRW_n}\left[\int_0^\infty e^{-\int_0^t V_D(X_s)\,ds}\, I(X_t = y)\, dt\right] \tag{69}
\]
\[
= \frac{1}{2d}\, \mathbb{E}_x^{DRW_n}\left[\sum_{k=0}^\infty \left(\prod_{z\in D_n} \left(\frac{2d}{2d+1}\right)^{l_k(z)}\right) I(X_k = y)\right] \tag{70}
\]

PROOF. The first equality is just the Feynman-Kac formula; the second equality requires some further explanation. The expression in the large brackets in the third equation represents the probability that the random walk has survived in the lattice $V_n$ until arriving at $y$ at time $k$, with $k \in \mathbb{N}$ fixed. Summing over all $k \in \mathbb{N}$ and taking the expectation starting from $x \in V_n$ gives the Green's function expressed in DRW terms. The random walk is killed upon leaving $V_n$, and $l_k(z)$ denotes the number of visits of the random walk to the site $z$ up to time $k$. More formally, to get from the second to the third equality, one has to overcome the fact that at a fixed time $t$ the CRW has usually not taken an exact number of steps. This is overcome by conditioning on the time lying between two transitions of the CRW. Denote by $k$ the number of jumps that the CRW has made up to time $t$. Defining $\alpha = \sum_{z\in D_n} l_k(z) = l_k(D_n)$ and $\beta = k - \alpha$, so that $\alpha + \beta = k$ equals the total number of jumps, we introduce the independent random variables $T_1 \sim \mathrm{Gamma}(\alpha, 2d)$, $T_2 \sim \mathrm{Gamma}(\beta, 2d)$ and $\tau \sim \mathrm{Exp}(2d)$. This way, we restate the middle term of the second equation as

\[
\mathbb{E}_x^{CRW_n}\left[e^{-\int_0^t V_D(X_s)\,ds}\right] = \mathbb{E}_x^{CRW_n}\left[e^{-\int_0^t I(X_s\in D_n)\,ds}\right] = \mathbb{E}_x^{CRW_n}\left[e^{-l_t(D_n)}\right]
\]
\[
= \sum_{k=0}^\infty \mathbb{E}_x^{DRW_n}\left[e^{-\sum_{i=1}^k \tau_i I(X_i\in D_n)}\, I(T_1 + T_2 < t < T_1 + T_2 + \tau)\right]
\]
\[
= \sum_{k=0}^\infty \mathbb{E}_x^{DRW_n}\left[e^{-T_1}\, I(T_1 + T_2 < t < T_1 + T_2 + \tau)\right]
\]


We therefore must calculate
\[
\int_0^\infty \mathbb{E}\left[e^{-T_1}\, I(T_1 + T_2 < t < T_1 + T_2 + \tau)\right] dt \tag{71}
\]
This becomes
\[
\int_0^\infty \mathbb{E}\left[e^{-T_1}\, \mathbb{E}\left[I(T_2 < t - T_1 < T_2 + \tau)\, I(\tau > t - T_1 - T_2)\right]\right] dt
= \int_0^\infty \mathbb{E}\left[e^{-T_1}\, \mathbb{E}\left[I(T_2 < t - T_1)\, e^{-2d(t - T_1 - T_2)}\right]\right] dt
\]
\[
= \int_{t=0}^\infty dt\; \mathbb{E}\left[e^{-T_1}\, I(T_1 < t) \int_{x=0}^{t-T_1} \frac{(2d)^\beta x^{\beta-1} e^{-2dx}}{\Gamma(\beta)}\, e^{-2d(t - T_1 - x)}\, dx\right]
\]
\[
= \int_{t=0}^\infty e^{-2dt}\, dt \int_{u=0}^t \frac{(2d)^\alpha u^{\alpha-1} e^{-u}}{\Gamma(\alpha)}\, du \int_{x=0}^{t-u} \frac{(2d)^\beta x^{\beta-1}}{\Gamma(\beta)}\, dx
\]

rearranging terms gives

\[
= \frac{(2d)^{\alpha+\beta}}{(\alpha-1)!\,(\beta-1)!} \int_{t=0}^\infty e^{-2dt} \left[\int_{u=0}^t u^{\alpha-1} e^{-u} \left(\int_0^{t-u} x^{\beta-1}\, dx\right) du\right] dt
\]
\[
= \frac{(2d)^{\alpha+\beta}}{(\alpha-1)!\,\beta!} \int_{t=0}^\infty e^{-2dt} \left[\int_{u=0}^t u^{\alpha-1} e^{-u} (t-u)^\beta\, du\right] dt
\]
Now, by Fubini's theorem the order of integration may be interchanged while integrating over the lower triangle $\{0 \le u \le t\}$ of the $(t,u)$ plane, and it follows that

\[
= \frac{(2d)^{\alpha+\beta}}{(\alpha-1)!\,\beta!} \int_{u=0}^\infty e^{-u} u^{\alpha-1} \left[\int_{t=u}^\infty e^{-2dt} (t-u)^\beta\, dt\right] du
\]
Substituting $v = t - u$, we obtain

\[
= \frac{(2d)^{\alpha+\beta}}{(\alpha-1)!\,\beta!} \int_{u=0}^\infty e^{-(2d+1)u} u^{\alpha-1} \left[\int_{v=0}^\infty e^{-2dv} v^\beta\, dv\right] du
= \frac{(2d)^{\alpha-1}}{(\alpha-1)!} \int_{u=0}^\infty e^{-(2d+1)u} u^{\alpha-1}\, du
= \frac{1}{2d}\left(\frac{2d}{2d+1}\right)^\alpha
\]

Substituting this back into the second equation (69), we arrive at the third equation (70).
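The identity (68)-(70) can be verified numerically on a small lattice. The sketch below (pure Python, on an assumed one-dimensional chain $V_n = \{0,\dots,6\}$ with dissipative sites $\{2,5\}$; the choice of lattice is purely illustrative) factors $\Delta = M(I - Q)$, where $M$ is the diagonal of $\Delta$ and $Q$ the substochastic step matrix of the DRW, and checks that the truncated Neumann series $G = (\sum_k Q^k)\, M^{-1}$ indeed inverts $\Delta$:

```python
d, n = 1, 7
D = {2, 5}  # assumed dissipative sites in V_n = {0, ..., 6}

# Toppling matrix: diagonal 2d (conservative) or 2d + 1 (dissipative),
# off-diagonal -1 between nearest neighbours; mass leaks at the boundary.
Delta = [[0.0] * n for _ in range(n)]
for x in range(n):
    Delta[x][x] = 2 * d + (1 if x in D else 0)
    for y in (x - 1, x + 1):
        if 0 <= y < n:
            Delta[x][y] = -1.0

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Delta = M (I - Q): Q steps with prob 1/(2d) per neighbour at conservative
# sites and 1/(2d+1) at dissipative ones; the missing mass is the killing.
Q = [[(1.0 if i == j else 0.0) - Delta[i][j] / Delta[i][i] for j in range(n)]
     for i in range(n)]

# Truncated Neumann series sum_k Q^k (Q has spectral radius < 1 here).
series = [[float(i == j) for j in range(n)] for i in range(n)]
Qk = [row[:] for row in series]
for _ in range(600):
    Qk = matmul(Qk, Q)
    series = [[series[i][j] + Qk[i][j] for j in range(n)] for i in range(n)]

# DRW representation of the Green's function: G = (sum_k Q^k) M^{-1}.
G = [[series[i][j] / Delta[j][j] for j in range(n)] for i in range(n)]

# Dhar's formula says G = Delta^{-1}, so Delta @ G should be the identity.
check = matmul(Delta, G)
```

The factorisation makes the killing probabilities explicit: a row of $Q$ at a dissipative site sums to $2d/(2d+1)$, exactly the survival probability from Lemma 5.2.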


6 Mathematical results

In this section, we will work in infinite volume ($\mathbb{Z}^d$). We therefore denote by $D$ the set of dissipative sites and by $F$ the set of source sites, omitting the subscript $n$. Rewriting the identity derived in Lemma 5.3 and performing the sum over $y \in \mathbb{Z}^d$, we find that it is a sufficient condition for a system to be non-critical that for all $x \in \mathbb{Z}^d$,

\[
\sum_{y\in\mathbb{Z}^d} \sum_{k=0}^\infty \mathbb{E}_x^{DRW}\left[\left(\prod_{z\in D} \left(\frac{2d}{2d+1}\right)^{l_k(z)}\right) I(X_k = y)\right] < \infty \tag{72}
\]

which can in turn be reformulated, by actually carrying out the sum over $y \in \mathbb{Z}^d$ and using abbreviated notation, as
\[
\sum_{k=0}^\infty \mathbb{E}_x^{DRW}\left[\left(\frac{2d}{2d+1}\right)^{l_k(D)}\right] < \infty \tag{73}
\]
for all $x \in V$. This in turn can be interpreted as the finiteness of the expected value of a survival time. Indeed, consider the random walk which at each dissipative site moves to each of its $2d$ nearest neighbours with probability $1/(2d+1)$ and is killed with probability $1/(2d+1)$, and at each non-dissipative site moves with probability $1/(2d)$ to each of its nearest neighbours. Let us denote this walk by $\hat X_n$, $n \in \mathbb{N}$, and call $T$ the survival time of this random walk. Then we have

\[
\sum_{k=0}^\infty \mathbb{E}_x^{DRW}\left[\left(\prod_{z\in D} \left(\frac{2d}{2d+1}\right)^{l_k(z)}\right) I(X_k = y)\right] = \sum_{k=0}^\infty \hat{\mathbb{E}}_x\left[I(\hat X_k = y)\, I(T > k)\right]
\]

Therefore, (73) is implied by

\[
\sum_{k=0}^\infty \sum_y \hat{\mathbb{E}}_x\left[I(\hat X_k = y)\, I(T > k)\right] = \sum_{k=0}^\infty \hat{\mathbb{E}}_x\left[I(T > k)\right] \le \hat{\mathbb{E}}_x(T)
\]

I.e., the model is non-critical if the expected survival time of the random walk with traps on the dissipative sites (as described above) is finite.
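As the simplest illustration of this survival-time interpretation: when every site of $\mathbb{Z}$ is dissipative ($d = 1$), each visited site is a kill-attempt with probability $1/(2d+1) = 1/3$, so the position of the walk is irrelevant and $T$ is geometric with mean $2d+1 = 3$. A quick simulation sketch:

```python
import random

def survival_steps(d: int = 1, rng=random) -> int:
    """Number of steps until the killed walk dies when EVERY site is
    dissipative: each visited site kills with probability 1/(2d+1),
    so the actual position of the walk does not matter."""
    steps = 1
    while rng.random() >= 1.0 / (2 * d + 1):
        steps += 1
    return steps

random.seed(2)
n = 100_000
est = sum(survival_steps() for _ in range(n)) / n
# E[T] = 2d + 1 = 3 for d = 1: a finite expected survival time,
# so the totally dissipative model is non-critical.
```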

6.1 Conditions on D to obtain non-criticality

Our first result concerns an infinite system in $\mathbb{Z}^d$ with a finite number of normal sites and no source sites. So, denoting the set of dissipative sites by $D$, we have that $\mathbb{Z}^d\setminus D$ is a finite set. Then:

THEOREM 6.1. If $\mathbb{Z}^d\setminus D$ is a finite set, the system is not critical. That is, for all $x \in \mathbb{Z}^d$:

\[
\sum_{k=0}^\infty \mathbb{E}_x^{DRW}\left[\left(\frac{2d}{2d+1}\right)^{l_k(D)}\right] < \infty \tag{74}
\]

PROOF. We follow Redig [11] to prove (74). We take $x = 0$; the general case is analogous. It thus suffices to show that
\[
\sum_{k=0}^\infty \mathbb{E}_0^{DRW}\left[\left(\frac{2d}{2d+1}\right)^{l_k(D)}\right] < \infty \tag{75}
\]


Choose α ∈ (0, 1). We split the calculation into two parts by splitting the expectation

\[
\mathbb{E}_0^{DRW}\left[\left(\frac{2d}{2d+1}\right)^{l_k(D)}\right]
= \mathbb{E}_0^{DRW}\left[\left(\frac{2d}{2d+1}\right)^{l_k(D)} I(l_k(D) > \alpha k)\right] + \mathbb{E}_0^{DRW}\left[\left(\frac{2d}{2d+1}\right)^{l_k(D)} I(l_k(D) \le \alpha k)\right]
\]
\[
\le \left(\frac{2d}{2d+1}\right)^{\alpha k} + \mathbb{E}_0^{DRW}\left[I(l_k(D) \le \alpha k)\right] \tag{76}
\]

So it now suffices to show that the probability $\mathbb{E}_0^{DRW}(I(l_k(D) \le \alpha k)) = P_0^{DRW}(l_k(D) \le \alpha k)$ is summable in $k$ to obtain our result of non-criticality. Notice that because $l_k(D) + l_k(D^c) = k$ equals the total time, this amounts to estimating the probability

\[
\mathbb{E}_0^{DRW}\left[I(l_k(D^c) \ge (1-\alpha)k)\right]
\]
Because $D^c$ is finite, we have the following non-optimal bound on local-time tails from Lemma 1, Section 3 of [12]: there exist $a, b > 0$ such that for all $\delta > 0$

\[
P_0\left(\sup_x l_k(x) > k^{1/2+\delta}\right) \le a\, e^{-b k^{\delta/2}}
\]
As a consequence, choosing $\varphi(k) = \alpha k$ for some $\alpha \in (0,1)$, and using $l_k(D^c) \le |D^c| \sup_x l_k(x)$ (because $\sup_x l_k(x) \ge l_k(x)$ for all $x$), we obtain
\[
P\left(l_k(D^c) \ge (1-\alpha)k\right) \le P\left(\sup_x l_k(x) \ge \frac{(1-\alpha)k}{|D^c|}\right) \le a' e^{-b' k^{1/4}} \tag{77}
\]
which is summable in $k$.

From the proof we can derive the following criterion for non-criticality. Note that the finiteness of $D^c$ was only used about halfway through the proof of (74).

PROPOSITION 6.1. If the set of dissipative sites is such that there exists a function $\varphi : \mathbb{N} \to \mathbb{R}$ such that
1. \[
\sum_{k=0}^\infty \left(\frac{2d}{2d+1}\right)^{\varphi(k)} < \infty
\]

2. for all $x \in \mathbb{Z}^d$,
\[
\sum_k P_x^{DRW}\left(l_k(D^c) \ge k - \varphi(k)\right) < \infty
\]
then the model with dissipative sites in $D$ is non-critical. The first part of this proposition provides the upper-bound estimate for the local times, while the second part estimates the lower bound. For the case $D^c$ finite, the easy choice was $\varphi(k) = \alpha k$. As

discussed earlier, one can view the avalanche propagation as a random walk that is killed upon leaving its lattice or upon hitting a dissipative site. Let us denote this killed random walk by $\hat X_n$, $n \in \mathbb{N}$. Because whenever a dissipative site is hit, the random walk $\hat X_n$ is killed with probability $1/(2d+1)$, it is clear that whenever the hitting time of the set of dissipative sites by the ordinary random walk has finite expectation, then so does the survival time of the killed walk $\hat X_n$: in that case, the ordinary walk visits the set $D$ infinitely many times with probability one. This leads to the following proposition.


PROPOSITION 6.2. Let $\hat X_n$, $n \in \mathbb{N}$, denote the killed walk starting from the origin and $X_n$, $n \in \mathbb{N}$, the ordinary walk starting from the origin. Let $D$ be a subset of the lattice $\mathbb{Z}^d$. Denote for $x \in \mathbb{Z}^d$

\[
\tau_x(D) = \inf\{n \ge I(x \in D) : X_n \in D - x\}
\]

the hitting or return time of $D - x$. Furthermore, let $\tau(D) = \sup_x \tau_x(D)$. If
\[
\mathbb{E}_0^{DRW}(\tau(D)) < \infty \tag{78}
\]
then the model with dissipative sites in $D$ is non-critical.

PROOF. The survival time $T$ of the walk $\hat X_n$, $n \in \mathbb{N}$, starting from $x$ is a sum
\[
T = \sum_{i=1}^N \tau_{x_i}(D)
\]

where $N$ is geometric with parameter $1/(2d+1)$, independent of the $\tau_{x_i}$, $i = 1, 2, \dots$, and where the $\tau_{x_i}(D)$ denote the successive hitting or return times of the set $D$. This sum represents the survival time as the sum of the individual hitting times of the set $D$, $N$ times: the first $N - 1$ times the walk is not killed upon entering a site in $D$, but the last time it is. Thus, $N \sim \mathrm{Geo}(1/(2d+1))$. Therefore, because

\[
\tau(D) \ge \tau_{x_i} \text{ for all } x_i \qquad \text{and} \qquad \mathbb{E}[N] = 2d + 1
\]

\[
\mathbb{E}[T] = \mathbb{E}\left[\sum_{i=1}^N \tau_{x_i}(D)\right] \le (2d+1)\,\mathbb{E}(\tau(D)) < \infty
\]

PROPOSITION 6.3. If $D$ is such that
\[
\sup_{x\in\mathbb{Z}^d} \inf_{y\in D} |x - y| = R + 1 < \infty,
\]
i.e., every point in $\mathbb{Z}^d$ lies within a bounded distance of a point in $D$, then (78) is satisfied.

PROOF. Denote for $x \in \mathbb{Z}^d$
\[
B(x,R) = \{y \in \mathbb{Z}^d : |x - y| \le R\}.
\]
Upon exiting $B(x,R)$ there is a strictly positive probability that a point of $D$ is hit. This probability is bounded from below by a number $\kappa_R$ depending only on $R$: indeed, denoting by $\sigma_{B(x,R)}$ the exit time of $B(x,R)$, we can choose $\kappa_R = \inf_{z\in B(x,R)} \inf_y P_z(X_{\sigma_{B(x,R)}} = y)$. If no point of $D$ is hit upon exiting $B(x,R)$, then start from the exit point and consider the exit time of the ball with radius $R$ around that point. We see that from any $x$ the hitting time of $D$ is bounded from above by a geometric sum of exit times of balls with radius $R$, not depending on $x$. Using Proposition 6.2, we find that the model is non-critical.

COROLLARY 6.1. If $D$ is a sublattice of $\mathbb{Z}^d$, then the model is not critical.

PROOF. Let $e_1, \dots, e_d$ be unit vectors spanning $\mathbb{Z}^d$, and let the sublattice be generated by $a_1 e_1, \dots, a_d e_d$ for fixed $a_1, \dots, a_d \in \mathbb{Z}$, i.e., $D = \{y \in \mathbb{Z}^d : y = k_1 a_1 e_1 + \dots + k_d a_d e_d,\ k_1, \dots, k_d \in \mathbb{Z}\}$. It follows that
\[
\sup_{x\in\mathbb{Z}^d} \inf_{y\in D} |x - y| \le \frac{1}{2} \sum_{i=1}^d |a_i| < \infty
\]
and by Proposition 6.3 the model is not critical.
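A hedged illustration of Proposition 6.2 and Corollary 6.1: the sketch below simulates the killed walk on $\mathbb{Z}$ with the sublattice $D = 3\mathbb{Z}$ (an arbitrary choice for illustration). On a multiple of 3 the walk dies with probability $1/3$ and otherwise steps $\pm 1$ with equal probability; elsewhere it steps $\pm 1$ with probability $1/2$ each. Counting one tick per visited site, each non-final visit to $D$ is followed by an excursion of expected length 3 (one step off $D$ plus an expected gambler's-ruin return time of 2), so with $\mathbb{E}[N] = 3$ visits to $D$ the expected survival time works out to $1 + 3(\mathbb{E}[N] - 1) = 7$ ticks:

```python
import random

def survival_steps_3Z(rng=random) -> int:
    """Killed walk on Z with dissipative set D = 3Z: each tick counts one
    visited site; on a site of D the walk dies with probability 1/3,
    otherwise it steps -1 or +1 uniformly at random."""
    x, steps = 0, 0
    while True:
        steps += 1
        if x % 3 == 0 and rng.random() < 1.0 / 3.0:
            return steps  # killed at a dissipative site
        x += rng.choice((-1, 1))

random.seed(3)
n = 50_000
est = sum(survival_steps_3Z() for _ in range(n)) / n
# Finite expected survival time (about 7 ticks under this convention),
# so the model with D = 3Z is non-critical, in line with Corollary 6.1.
```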


6.2 Adding sources

A source site is a non-boundary site that becomes unstable earlier than a conservative site. Upon toppling, one grain is given to each neighbour, hence grains are created upon toppling. In order to make the model well-defined, we have to allow possibly larger (than one) dissipation at dissipative sites. Therefore, we consider the more general toppling matrix
\[
\Delta^{D,F,\alpha,\beta}_{x,y} = \begin{cases} -\gamma & \text{for } x, y \in V,\ |x-y| = 1 \\ 2d\gamma + \alpha & \text{for } x = y \in D \\ 2d\gamma & \text{for } x = y \in C \\ 2d\gamma - \beta & \text{for } x = y \in F \end{cases} \tag{79}
\]

In words: a site is stable if its height is below $2d\gamma$ for a conservative site, below $2d\gamma + \alpha$ for a dissipative site, and below $2d\gamma - \beta$ for a source site. Upon toppling, $2d\gamma$ grains are distributed equally among the nearest neighbours. This means that upon toppling, a dissipative site loses mass $\alpha$, whereas a source site gains mass $\beta$. Notice that stabilization of an unstable configuration might be ill-defined, i.e., infinite legal toppling sequences can occur even in finite volume. However, by the same argument as in the classical case, stabilization is unique and well-defined if there exists a stabilizing sequence of topplings: then all other sequences of topplings end and lead to the same stable result. In that case, we can also apply Dhar's formula and follow the line of the previous section. We then define the potential
\[
V_F(x) = \begin{cases} +\alpha & \text{if } x \in D \\ 0 & \text{if } x \in C \\ -\beta & \text{if } x \in F \end{cases}
\]

Then, provided the model with toppling matrix $\Delta^{D,F,\alpha,\beta}_{x,y}$ is well-defined, it is non-critical if for all $x$ we have
\[
\int_0^\infty \mathbb{E}_x^{CRW_\gamma}\left[e^{-\int_0^t V_F(X_s)\,ds}\right] dt < \infty \tag{80}
\]

The expectation $\mathbb{E}_x^{CRW_\gamma}\left[e^{-\int_0^t V_F(X_s)\,ds}\right]$ can be interpreted as the total mass at time $t > 0$, starting from a unit mass at time zero, which splits at rate one (into two unit masses) on source sites, is killed at rate $\alpha$ at dissipative sites, and besides performs continuous-time random walk at rate $\gamma$. Notice that

\[
\mathbb{E}_x^{CRW_\gamma}\left[e^{-\int_0^t V_F(X_s)\,ds}\right] = \mathbb{E}_x^{CRW}\left[e^{-\frac{1}{\gamma}\int_0^{t\gamma} V_F(X_s)\,ds}\right]
\]
by the time substitution $t \to t\gamma$.

6.3 Finitely many sources

First we look at a finite number of source sites, with dissipative sites everywhere else.

PROPOSITION 6.4. In every dimension $d \in \mathbb{N}$, if there is a finite number of source sites, then for $\gamma$ large enough the sandpile model with toppling matrix (79) is not critical.

PROOF. We start with a single source site at the origin. In that case we have

\[
\mathbb{E}_0^{CRW_\gamma}\left[e^{-\int_0^t V_F(X_s)\,ds}\right] = \mathbb{E}_0^{CRW(1)}\left[\exp\left(\frac{\beta\, l_{t\gamma}(0)}{\gamma}\right) \exp\left(-\frac{\alpha\, l_{t\gamma}(\mathbb{Z}^d\setminus\{0\})}{\gamma}\right)\right] \tag{81}
\]

Now since
\[
\frac{\alpha}{\gamma}\, l_{t\gamma}(\mathbb{Z}^d\setminus\{0\}) = \frac{\alpha}{\gamma}\, t\gamma - \frac{\alpha}{\gamma}\, l_{t\gamma}(0) = \alpha t - \frac{\alpha}{\gamma}\, l_{t\gamma}(0) \tag{82}
\]


we can rewrite (81) as
\[
\exp(-\alpha t)\, \mathbb{E}_0^{CRW(1)}\left[\exp\left(\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(0)\right)\right] \tag{83}
\]

where as before $l_t(G) = \int_0^t I(X_s \in G)\, ds$ denotes the local time of a set $G \subseteq \mathbb{Z}^d$. Denote
\[
F(\mu) = \lim_{t\to\infty} \frac{1}{t} \log \mathbb{E}_0 \exp\left(\mu\, l_t(0)\right) \tag{84}
\]
Notice that this is exactly the free energy of the homogeneous pinning model in $d = 1, 2$, see [13]. It is well known (cf. [13], Theorem 2.10) that around $\mu \approx 0$ the behaviour is

\[
F(\mu) = O(\mu^2)
\]

in $d = 1$. In particular,
\[
\lim_{\mu\to 0} \frac{F(\mu)}{\mu} = 0 \tag{85}
\]
Therefore,
\[
\mathbb{E}_0^{CRW}\left[\exp\left(\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(0)\right)\right] \approx \exp\left(t\gamma\, F\!\left(\frac{\alpha+\beta}{\gamma}\right) + o(t)\right) \tag{86}
\]
by (85), and as $\gamma \to \infty$ we have
\[
\gamma\, F\!\left(\frac{\alpha+\beta}{\gamma}\right) \to 0 \quad \text{as } \gamma \to \infty \tag{87}
\]
As a consequence, equation (83) is integrable as a function of $t$ for $\gamma$ large enough. In $d = 2, 3$, (85) still holds. In $d \ge 4$, there exists $\mu_c > 0$ such that $F(\mu) = 0$ for $\mu \in [0, \mu_c]$. Hence the right-hand side of (81) is trivially integrable in $t$. Finally, if we have a finite number of source sites ($|F| = k$), then we

need to estimate
\[
\mathbb{E}_0^{CRW}\left[\exp\left(\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(F)\right)\right] \tag{88}
\]
The Cauchy-Schwarz inequality now proves useful. For two random variables $X, Y$ one can define an inner product by $\langle X, Y\rangle \equiv \mathbb{E}(XY)$. The Cauchy-Schwarz inequality then states that $|\mathbb{E}(XY)|^2 \le \mathbb{E}(X^2)\,\mathbb{E}(Y^2)$. Applied to (88), we have in the case $F = \{s_1, s_2\}$, such that the system is stabilizable:

\[
\mathbb{E}_0^{CRW_\gamma}\left[e^{-\int_0^t V_F(X_s)\,ds}\right] = \mathbb{E}_0^{CRW(1)}\left[\exp\left(\frac{\beta\, l_{t\gamma}(F)}{\gamma}\right) \exp\left(-\frac{\alpha\, l_{t\gamma}(\mathbb{Z}^d\setminus F)}{\gamma}\right)\right] \tag{89}
\]
Just as in (83), we can rewrite this as

\[
\exp(-\alpha t)\, \mathbb{E}_0^{CRW(1)}\left[\exp\left(\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(F)\right)\right] \tag{90}
\]

Since F = {s1, s2}, this can be written as

\[
\exp(-\alpha t)\, \mathbb{E}_0^{CRW(1)}\left[\exp\left(\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(s_1)\right) \exp\left(\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(s_2)\right)\right] \tag{91}
\]

By the Cauchy-Schwarz inequality, this is bounded above by

\[
\exp(-\alpha t) \left(\mathbb{E}_0^{CRW(1)}\left[\exp\left(2\,\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(s_1)\right)\right] \mathbb{E}_0^{CRW(1)}\left[\exp\left(2\,\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(s_2)\right)\right]\right)^{1/2} \tag{92}
\]


Using (86) and noting that $\mathbb{E}_0(l_t(x)) \le \mathbb{E}_0(l_t(0))$ for $x \ne 0$, we obtain as an upper bound for (92):

\[
\exp(-\alpha t) \left(\mathbb{E}_0^{CRW(1)}\left[\exp\left(2\,\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(0)\right)\right] \mathbb{E}_0^{CRW(1)}\left[\exp\left(2\,\frac{\alpha+\beta}{\gamma}\, l_{t\gamma}(0)\right)\right]\right)^{1/2}
\]
\[
= \exp(-\alpha t) \exp\left(\tfrac{1}{2}\, t\gamma\, F\!\left(2\,\frac{\alpha+\beta}{\gamma}\right) + o(t)\right) \exp\left(\tfrac{1}{2}\, t\gamma\, F\!\left(2\,\frac{\alpha+\beta}{\gamma}\right) + o(t)\right)
= \exp(-\alpha t) \exp\left(\gamma t\, F\!\left(2\,\frac{\alpha+\beta}{\gamma}\right) + o(t)\right)
\]

which, as $\gamma \to \infty$, is integrable in $t$. Finally, extending the above argument to $|F| = k$, while the system is still stabilizable, leads to
\[
\exp(-\alpha t) \exp\left(\gamma t\, F\!\left(2k\,\frac{\alpha+\beta}{\gamma}\right) + o(t)\right)
\]

where k = |F | is the number of source sites. The result then follows again from (87).


7 A Renormalisation Approach to the BTW model

7.1 Introduction: Critical phenomena and renormalisation

We now take a different, more physically oriented viewpoint on the ASM. A discipline within the field of statistical mechanics is the study of so-called critical phenomena. The model as introduced by BTW falls into that category, since the main reason for which they introduced it is the emergence of self-organized criticality. In this short section, we briefly explain frequently appearing concepts associated with critical phenomena. Thereafter, we will consider the BTW model through a much-used apparatus in the context of critical phenomena: renormalisation theory. It will also become clear why the BTW model shows self-organised criticality and what sets it apart from the Ising model or percolation [14]. After deducing some critical exponents governing the behaviour of the model in its critical state, we try to perform renormalisation with constant dissipation over the lattice. In the next section, we will present the numerical simulations performed to analyse the BTW model in one and two dimensions. In the section thereafter, we will explain some ideas and examples associated with self-organised criticality: how well is it understood, and can it play an important role in physics today? But first, let us introduce some concepts in the field of phase transitions and critical phenomena. This part of our report is mostly based on Chapter 8 of Jos Thijssen's lecture notes on statistical mechanics [14] and a standard work on critical phenomena by D. Sornette [15]. The present discussion is by no means meant to be comprehensive, so for detailed information we refer the reader to these sources.

Critical phenomena Critical phenomena are present everywhere in nature. Perhaps the best known is the phase transition of water into vapour or ice. Who has not tried the famous experiment of dropping a few grains of sand into a glass of super-cooled water? It instantaneously freezes. Phase transitions of water are characterized by a sudden non-linear change of density at the boiling point and a sudden emergence of order near the freezing point. It seems strange that nature exhibits almost discontinuous behaviour near a critical point, because in classical mechanics we are used to Newton's laws, which are smooth and analytic most of the time. It was during the twentieth century that scientists began to realize that the combined behaviour of very large quantities of matter controlled by a parameter could become very chaotic if the parameter reached a critical value. At this value a physical singularity is reached, resulting in non-analytic behaviour. We now list a number of subjects often encountered in the study of critical phenomena. When a critical point in some system is reached, certain physical quantities are governed by critical exponents. This behaviour is different from the far-from-critical behaviour, where mean-field approaches hold. Since various physical quantities are related by laws of nature, these exponents cannot be defined independently. This is where scaling laws come in: they form relations between critical exponents that fulfil the laws of nature. It is also known that the correlation length, which is a measure of the order or correlation in the system, diverges when a critical point is reached. The correlation length then shows characteristic power-law behaviour. These power laws are heavy-tailed distributions and are typical for critical phenomena. Finally, the concept of universality applies to different models which are characterized by the same fixed point in their critical state.
These models are said to belong to the same universality class. For more information about these topics, we strongly recommend [15].

Renormalisation in Percolation: An introductory example We introduce the method of renormalisation via an easy example. Renormalisation is a technique originally developed in Quantum Electrodynamics (QED) and later applied in fields of statistical physics. It is a method to treat self-similar phenomena, which show the same kind of behaviour on many scales. We will apply renormalisation to an easy variant of percolation to illustrate the procedure. A classic example of percolation is as follows. We have a two-dimensional triangular lattice called V . A site on this lattice can take two


states: value 1 and 0. A cluster is loosely defined as a connected set of sites taking the same value. In the model, there is a parameter $p \in [0,1]$ that determines the global probability of a site being in state one or zero. In the model, $p$ can be varied in order to observe different behaviour of connected paths. In fact, there is a critical value of $p$, which we denote by $p_c$, for which self-similar paths exist: spanning clusters exist with probability one and clusters lack a characteristic size. If this critical behaviour takes place, we say that the model has undergone a 'percolation transition'. There is essentially one important parameter in this model: $p$. We now apply a renormalisation procedure to obtain the critical percolation probability $p_c$: the value of $p$ at which criticality emerges. In order to do so, let us start by considering the atomic (smallest) scale first. The method of renormalisation works by iteratively changing scales and defining relations between different scales. We call the probability of an atomic cell taking the value one $p^{(0)}$. Denoting the atomic sites by $x^{(0)}_m$, where $m$ runs over the lattice, we have
\[
p^{(0)} = P\left(x^{(0)}_m = 1\right) \quad \text{at scale } 0 \tag{93}
\]
It is now natural to relate the scale-dependent parameter $p^{(1)}$ to the atomic parameter $p^{(0)}$ by the following rule.

Figure 5: A coarse-grained cell takes value one if and only if two or more subcells take value one.

If two or more of the smaller sites making up the coarse-grained site $x^{(1)}_m$ have value one, then $x^{(1)}_m = 1$; otherwise $x^{(1)}_m = 0$. There are four possibilities of achieving this: all three small sites have value one, or one of three rotationally symmetric cases where only two sites have value one. By taking into account the independence of the individual values at the same scale, we arrive at the renormalisation transformation

\[
p^{(k+1)} = R\left(p^{(k)}\right) = \left(p^{(k)}\right)^3 + 3\left(p^{(k)}\right)^2\left(1 - p^{(k)}\right) \tag{94}
\]

In renormalisation, one looks for fixed points of the transformation; a fixed point is a point that is left invariant by the transformation. Under the assumption of scale invariance, these fixed points give vital information about the nature of the critical state of a model. Looking at (94), we find the fixed points $p = 0$, $p = 1$ and $p = 1/2$. These correspond to the whole system taking value 0, value 1, or a scale-invariant mixture of both. Indeed, $p_c = 1/2$ is the critical parameter we were looking for. This fixed point of the renormalisation transformation determines the large-scale behaviour of the model.
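The fixed-point structure of (94) is easy to explore numerically (a minimal sketch):

```python
def R(p: float) -> float:
    """One renormalisation step (94): a coarse-grained triangular cell is
    occupied iff at least two of its three sites are occupied."""
    return p**3 + 3 * p**2 * (1 - p)

def flow(p: float, steps: int = 50) -> float:
    """Iterate the renormalisation transformation."""
    for _ in range(steps):
        p = R(p)
    return p

mid = R(0.5)      # the non-trivial fixed point p_c = 1/2 is preserved
high = flow(0.6)  # flows towards the trivial fixed point p = 1
low = flow(0.4)   # flows towards the trivial fixed point p = 0
```

Starting slightly above or below $p_c$, the flow runs to the trivial fixed points, which is exactly why $p_c = 1/2$ is the unstable, critical fixed point of the transformation.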

7.2 General remarks

Several attempts have been made to apply the theory of renormalisation to the Abelian Sandpile model in two dimensions, such as the paper by Vespignani, Zapperi and Pietronero [16]. This section is organised as follows. Firstly, we discuss some properties of the ASM indicating that a 'standard' renormalisation procedure as in the Ising model will not prove successful. We will also introduce some


general scaling laws as an ansatz. These describe the nature of the critical state in the ASM. Secondly, we will determine the renormalisation equations in detail, by linking a generic scale $b$ to a scale $2b$. Finally, we will try to analyse the uniformly dissipative case of the ASM using a renormalisation approach. From a mathematical point of view, we have already proven that criticality is lost when introducing a uniform density of dissipative sites. Will this result be reflected in the renormalisation approach? The emergence of SOC in the Abelian Sandpile model introduces difficulties in the renormalisation procedure that are not encountered in, for example, the Ising model. Namely, the renormalisation group flow is completely attractive, due to the lack of any control parameter like temperature. Indeed, this is precisely the phenomenon described by BTW as self-organized criticality. Vespignani et al. also analysed the behaviour of DLA-like problems with a renormalisation scheme approach. While the notion of self-organised criticality is also present in this kind of problem, the renormalisation approach differs substantially. This shows the difficult nature of self-organizing phenomena: in fact, some scientists believe that a more general theoretical framework is needed to treat problems like DLA and the ASM in a consistent way [16]. In order to characterize the critical state of the ASM, a set of critical exponents has been defined. It is known numerically and physically, and conjectured mathematically, that avalanche sizes $s$ show power-law behaviour, as do avalanche durations $t$ and avalanche radii $r$:

\[
f_S(s) \sim s^{-\tau}, \qquad f_T(t) \sim t^{-\alpha}, \qquad f_R(r) \sim r^{-\lambda}
\]

Note that $s$ corresponds to the mathematical notion of $|C_V(\cdot, \eta)|$. Also, it is known that the avalanche duration and the linear extension of an avalanche follow the scaling law

\[
t \sim r^z \tag{95}
\]

where z is called the dynamical exponent. Using general scaling law arguments [1], one can find relations between critical exponents of the BTW model. In two dimensions, we have

\[
\lambda = 1 + 2(\tau - 1), \qquad \alpha = 1 + \frac{2(\tau - 1)}{z}
\]

This way, it is possible to define τ and z as independent critical exponents. We now turn to the micro-analysis of the ASM at a generic scale b and try to formulate a renormalisation transformation to link it to a scale 2b. This way, we will be able to estimate the critical exponents τ and z. Because analysing avalanche radii R is not part of our project, we will be satisfied when we have estimated τ.
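A small helper, assuming the scaling relations $\lambda = 1 + 2(\tau - 1)$ and $\alpha = 1 + 2(\tau - 1)/z$ in $d = 2$; the exponent values plugged in below are purely illustrative, not results of this thesis:

```python
def radius_exponent(tau: float) -> float:
    """lambda = 1 + 2(tau - 1): matching the size distribution s^(-tau)
    with the radius distribution r^(-lambda) for compact (s ~ r^2) avalanches."""
    return 1 + 2 * (tau - 1)

def time_exponent(tau: float, z: float) -> float:
    """alpha = 1 + 2(tau - 1)/z, using the duration-radius law t ~ r^z."""
    return 1 + 2 * (tau - 1) / z

# Illustrative values only: tau = 1.25, z = 1.25
lam = radius_exponent(1.25)        # 1.5
alpha = time_exponent(1.25, 1.25)  # 1.4
```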

7.3 Renormalisation equations

We follow the approach of Vespignani et al. [16] by considering three kinds of heights in the model, in the original two-dimensional case described by Bak, Tang and Wiesenfeld. Of course, in the original model a stable site can have a height contained in the set {0, 1, 2, 3}, whereas a height of 4 causes the site to topple. However, for the renormalisation procedure, it is convenient to distinguish only three different ‘states’ of a coarse grained site. These are referred to as stable, critical and unstable, and are described in Figure 6.


Figure 6: Classifications of cells in the renormalisation of the Abelian Sandpile model, together with an illustration of a possible series of events.

Of course, on the original scale of the model, these three states correspond directly to a given height: a height of 0, 1 or 2 corresponds to a stable site, whereas 3 corresponds to a critical site and 4 to an unstable site. We define a stable site as a site whose height is far from its toppling height: upon adding a grain to a stable site, it will not topple. Critical sites will topple when a grain is added, whereas unstable sites have a strictly higher height than the critical height. Let us consider a 'generalized sandpile model'. To a cell, we assign a size $b$. We add a 'quantum of mass' $\delta E(b)$ to a coarse-grained cell of size $b$. Upon addition of this quantum of energy, internal relaxations in the generalized cell of size $b$ can occur, but as long as these relaxations do not influence other coarse-grained cells we call the cell stable. Conversely, a cell is called critical if it influences one or more surrounding cells when a quantum of energy is added. It is now natural to introduce a parameter $\rho(b)$ that gives the density of critical cells at a generic scale $b$ of coarse graining. This parameter differs from the temperature in the Ising model in the sense that it is not a relevant control parameter; $\rho$ automatically reaches its equilibrium value exactly at the attractive critical point, without any external tuning. Now, we postulate that the addition of a quantum of energy at scale $b$ to a critical cell can lead to four different events.

Figure 7: The four different relaxation events. Note that the transfer of mass from a relaxation event does not necessarily make the stable cells critical.

We denote these four different events by the probability vector at a scale $b$:
\[
P(b) = (p_1, p_2, p_3, p_4), \qquad \text{with } \sum_i p_i = 1 \tag{96}
\]

In this vector, p1 is the probability that a relaxation of a site at scale b influences just one other site at that scale, p2 the probability of influencing two neighbouring sites, and so on. When speaking about relaxation of a cell at scale b, we mean that the subrelaxations at smaller scales span the entire cell at scale b and on top of that influence one or more surrounding cells. The probability vector P (b) then characterizes the phase space for the relaxation dynamics at a generic scale b of coarse graining. For example, in the original BTW model [1], on the original scale, the probability vector is always given by P = (0, 0, 0, 1). Vespignani et al. claim that the quantity (ρ(b); P (b)) fully characterizes the behaviour


of the model, and is therefore a good candidate to perform renormalisation on. It turns out that writing down the full renormalisation transformation is extremely complex and requires several pages. This is something we had not expected, but we shall sketch the basic approach. Looking at the square two-dimensional lattice, we can divide a cell at a generic scale $b$ into four subcells of scale $b/2$. Every cell at the larger scale can then be characterized by the number of critical subcells it contains. We will denote this quantity by $\alpha \in \{0, 1, 2, 3, 4\}$. Assuming a uniform density $\rho$ of critical cells at every scale, the probability of having $\alpha$ critical subcells at critical density $\rho$ is given by

\[
W_\alpha(\rho) = n_\alpha\, \rho^\alpha (1-\rho)^{4-\alpha} \tag{97}
\]
where $n_\alpha$ is a normalisation factor. This is a natural weight function, since in the critical state each cell can be viewed as critical with probability $\rho$. In the exact renormalisation equations, a scale parameter has to be added to this weight function $W$, but in the stationary state of scale invariance the density of critical cells $\rho$ will not change under renormalisation, as we will see in due time. It is a choice to view the heights of subcells as i.i.d. throughout the lattice. In reality, not all configurations are allowed in the stationary critical state, because a forbidden subconfiguration could be formed. However, an essential part of this renormalisation approach is to view the whole lattice in a sort of mean-field approximation. To analyse it otherwise would turn out far more difficult, if not impossible.
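A small numerical sketch of the weight function (97). The multiplicities $n_\alpha$ used below (4 for $\alpha = 2$, $4/3 + 8/3$ for the two orientation classes at $\alpha = 3$, 1 for $\alpha = 4$) are the counts discussed later in the text, and treating them this way is an assumption of this illustration:

```python
def W(alpha: int, rho: float, n_alpha: float) -> float:
    """Weight (97) of a coarse cell containing alpha critical subcells
    out of four, at critical-cell density rho."""
    return n_alpha * rho**alpha * (1 - rho)**(4 - alpha)

# Multiplicities as read from the text (assumed): 4 spanning pairs for
# alpha = 2, 4/3 + 8/3 for the two orientation classes at alpha = 3,
# and the single configuration at alpha = 4.
n = {2: 4.0, 3: 4.0 / 3.0 + 8.0 / 3.0, 4: 1.0}
rho = 0.5
weights = {a: W(a, rho, n[a]) for a in (2, 3, 4)}
# e.g. weights[2] = 4 * 0.5**2 * 0.5**2 = 0.25
```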

Figure 8: The four different possible starting configurations from which the renormalisation equations can be deduced.

As we shall see, the values 0 and 1 of $\alpha$ are not included in the renormalisation equations and therefore need not be analysed. The normalisation factor accounts for the number of essentially different ways a certain starting configuration can be formed. When $\alpha = 2$, four different configurations are possible where the critical cells are next to each other (i.e., not diagonally oriented). When $\alpha = 3$, also four different starting configurations are possible. It is useful to distinguish between different orientations of unstable subcells in the case $\alpha = 3$ for later calculations, resulting in the factors 4/3 and 8/3. When $\alpha = 4$, only one unique configuration exists. In order to define the renormalisation transformation, we assume that we add one unit of mass at scale $b/2$ to a critical subcell of scale $b/2$. By definition, it becomes unstable and will affect neighbouring subcells. We refer to a relaxation process denoted by $p_n^{(k)}$ as the event that one cell at scale $b/2$ influences $n$ neighbouring cells. Accordingly, $p_m^{(k+1)}$ refers to the event that one cell at scale $b$ influences $m$ neighbouring cells at scale $b$. In order to have a well-defined procedure, we commit ourselves to two basic rules:

• Each critical subcell (at scale $b/2$) relaxes if it receives mass from one or more neighbouring subcells. Relaxation influences up to four surrounding subcells.

• We only consider the connected series of relaxations that span the larger cell in the renormalisation dynamics. This ensures the connectivity of avalanches. Moreover, it ensures that the events at a

39 7 A RENORMALISATION APPROACH TO THE BTW MODEL

larger scale are actually larger than events at smaller scales. It is a very important rule, but it also involves a choice and thereby a inaccuracy. Renormalisation often involves such generalizations.

Let us discuss one easy case of a renormalisation equation in this manner, namely α = 2. This is the simplest example of calculating a renormalisation equation arising in the context of the BTW model. The probability of an event at scale b which influences just one neighbouring cell at scale b, which as before we denote by p_1^{(k+1)}, is expressed in events at the b/2 scale by the renormalisation equation

p_1^{(k+1)} = (1/4 p_1^{(k)} + 1/6 p_2^{(k)}) (1/2 p_1^{(k)} + 2/3 p_2^{(k)} + 1/2 p_3^{(k)})    (98)
            + (1/6 p_2^{(k)} + 1/4 p_3^{(k)}) (1/2 p_1^{(k)} + 1/6 p_2^{(k)})    (99)
            + (1/6 p_2^{(k)} + 1/4 p_3^{(k)}) (3/4 p_1^{(k)} + 1/2 p_2^{(k)} + 1/4 p_3^{(k)})    (100)

Let us make clear what is going on here. The cell-spanning rule demands that two connected relaxation events of the subcells must take place within the coarse-grained cell. Since α = 2, the critical subcells must lie next to or above each other, since diagonally placed critical sites will not span the cell when toppling. To make matters clearer, we represent the bigger cells by the four lines depicted in Figure 9.

Figure 9: Simplification in order to visualize the dynamics. The nodes of the right figure correspond to the subcells in the original picture.

The following sketches explain part of the easiest renormalisation equation (99).


Figure 10: Detailed explanation of the micro-relaxations contributing to the coarse-grained relaxation p_1^{(k+1)}.

The events depicted in Figure 10 are formed by a first relaxation p_1^{(k)} of the unstable cell towards the other critical cell, which happens with probability 1/4. Thereafter, the different events lead to the bold-faced terms of equation (99). Note that exactly one arrow points outside the bigger cell, which corresponds to a p_1 event. The terms on the right, which are the sums of the left probabilities, appear in equation (99). The reader may notice that only a few prefactors appear in these figures. These stem from the fact that a particular p_1 event arrow always occurs with probability 1/4. A p_2 event has a total of 6 different manifestations, from which it follows that a particular p_2 event occurs with probability 1/6. For a p_3 event the same holds as for a p_1 event. The next figure depicts another part of the above renormalisation equation.


Figure 11: Detailed explanation of the micro-relaxations contributing to the coarse-grained relaxation p_1^{(k+1)}.

The events depicted in Figure 11 are formed by a first relaxation p_3^{(k)} of the unstable cell towards three other cells: one critical subcell to ensure the spanning property of the subcell relaxations, one stable subcell, and one cell outside the bigger cell. Thereafter, the requirement that p_1^{(k+1)} influences merely one other coarse-grained site leaves two possibilities: either the next relaxation influences the already influenced cell or it remains within its original bigger cell. This leads to the bold-faced terms of equation (99). In a similar way, one can also write down the expressions for p_2^{(k+1)}, p_3^{(k+1)} and p_4^{(k+1)}. On top of that, for each α new equations have to be formulated. Note that α = 0 and α = 1 are not relevant in the renormalisation equations because the second rule, of relaxations spanning the coarse-grained cell, cannot be fulfilled. Vespignani et al. continue by generalising the argument above by introducing the sets ω_n of process series at scale b/2 that are involved in the renormalisation of p_n^{(k+1)}. Each such set has a weight f_{ω_n}. More generally, we can thus write the renormalisation transformation as

p_n^{(k+1)}(α) = Σ_{{ω_n}_α} f_{ω_n}({p_{n'}^{(k)}})    (101)

where both n and n' can take the values 1, 2, 3, 4. These equations are normalised such that

Σ_n Σ_{ω_n} f_{ω_n} = 1    (102)


Furthermore, to take into account all possible relevant values for α, one has to sum over α:

p_n^{(k+1)} = Σ_{α=2}^{4} W_α(ρ^{(k)}) p_n^{(k+1)}(α)    (103)
            = Σ_{α=2}^{4} W_α(ρ^{(k)}) Σ_{{ω_n}_α} f_{ω_n}({p_{n'}^{(k)}})    (104)

Here, Vespignani et al. remark that writing these equations out explicitly is extremely complex. They have included an appendix providing all relevant constants and parameters needed to write down the renormalisation equations exactly. As it turns out, the above equations only lead to trivial fixed points, which we also encountered in the simplest percolation example. We need an extra condition on the renormalisation dynamics. Since no dissipation has been introduced yet, we can formulate an equation governing mass conservation on all scales. Constructing a mass balance around a coarse-grained cell at scale b, we have

δE(b) = ρ^{(k+1)} [δE(b) p_1^{(k+1)} + 2 δE(b) p_2^{(k+1)} + 3 δE(b) p_3^{(k+1)} + 4 δE(b) p_4^{(k+1)}]    (105)

We now arrive at the final form of the renormalisation group for the Abelian Sandpile model:

p_n^{(k+1)} = Σ_{α=2}^{4} W_α(ρ^{(k)}) Σ_{{ω_n}_α} f_{ω_n}({p_{n'}^{(k)}})    (106)

ρ^{(k+1)} = [p_1^{(k+1)} + 2 p_2^{(k+1)} + 3 p_3^{(k+1)} + 4 p_4^{(k+1)}]^{-1}    (107)

where the second equation is just equation (105) divided by the positive number δE(b). Clearly, the added quantum of mass δE(b) does not appear in these equations: it is not a relevant parameter in the renormalisation dynamics. The second equation couples the dynamical properties to the stationary ones in a global manner.

7.4 Fixed points

Given equations (106) and (107), a fixed-point analysis can be done. To this end, we start our renormalisation quantities (ρ, P) at the smallest, original BTW scale. This means that we initialise our event probability vector as P^{(0)} = (0, 0, 0, 1) and let ρ^{(0)} > 0. Here, the superscripts refer to the scale at hand, just as in the derivation of the renormalisation equations. The value of ρ at the original BTW scale can be deduced from numerical computations, as well as from mathematical derivations [10]. Iterating over increasing scales gives a converging sequence of numbers. Finally, letting k → ∞, Vespignani et al. numerically find that the fixed point is given by

(ρ*; P*) = lim_{k→∞} (ρ^{(k)}; P^{(k)}) = (0.468; 0.240, 0.442, 0.261, 0.057)    (108)

In fact, this fixed point is attractive in the whole state space. It is also striking that the two-state Manna model [17] shows the same fixed point, implying that the two models belong to the same universality class.
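The reported fixed point can be checked for internal consistency against the mass-balance condition (107): inserting P* into the right-hand side should reproduce ρ*. A quick Python check (an illustrative verification, not part of the thesis code):

```python
def rho_from_p(p):
    """Mass balance, equation (107): rho = [p1 + 2*p2 + 3*p3 + 4*p4]^(-1)."""
    return 1.0 / sum(n * pn for n, pn in zip((1, 2, 3, 4), p))

p_star = (0.240, 0.442, 0.261, 0.057)
rho_star = rho_from_p(p_star)  # approximately 0.468, matching (108)
```

Indeed, 1 / (0.240 + 2·0.442 + 3·0.261 + 4·0.057) = 1/2.135 ≈ 0.468, so the two components of the published fixed point are mutually consistent.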

7.5 Critical exponents

Now that we have calculated the fixed-point parameters of the ASM, we can proceed to derive the critical exponents. This is the information that we were looking for in the first place. In the familiar Ising model [14], this is done by linearising the renormalisation flow around the critical fixed point. However, since the ASM fixed point is completely attractive, there are no parameters to tune to


arrive at criticality: it simply emerges. Therefore, we need to use a method that has also been used in a renormalisation analysis of DLA. The non-trivial fixed point that was found is the point at which criticality occurs: there are no characteristic time or length scales in the avalanche dynamics. At this point, the power-law behaviour of the avalanche dynamics comes into play. We now try to calculate the avalanche size critical exponent τ. In order to do this, we first have to establish a relationship between the radius and the surface of a typical avalanche, r and s. In two dimensions, this scaling law is given by s ∼ r². This is also supported by the fact that Redig [5] proves that all avalanche clusters are simply connected. Using the power law f_S(s) ∼ s^{-τ} and the fact that ds/dr = 2r, the probability density of R is given by

f_R(r) dr ∼ r^{1−2τ} dr    (109)

We now introduce the parameter K: the probability that a relaxation process is limited between two successive scales b^{(k)} and b^{(k+1)} and does not extend to scales larger than b^{(k+1)}. This can be formulated as: given that a process measurably occurs at scale b^{(k)}, what is the probability that it does not influence surrounding cells at scale b^{(k+1)}? This probability is given by

K = P(b^{(k)} ≤ R ≤ b^{(k+1)} | R ≥ b^{(k)})
  = ∫_{b^{(k)}}^{b^{(k+1)}} f_R(r) dr / ∫_{b^{(k)}}^{∞} f_R(r) dr
  = [r^{2−2τ}]_{b^{(k)}}^{b^{(k+1)}} / [r^{2−2τ}]_{b^{(k)}}^{∞}
  = ((b^{(k+1)})^{2−2τ} − (b^{(k)})^{2−2τ}) / (−(b^{(k)})^{2−2τ})
  = 1 − 2^{2(1−τ)}

since b^{(k+1)} = 2 b^{(k)}. We can now let k → ∞, assuming the well-definedness of this infinite-volume limit. Asymptotically, we then have the following relation between K and the self-organised fixed point:

K = p_1^*(1 − ρ*) + p_2^*(1 − ρ*)^2 + p_3^*(1 − ρ*)^3 + p_4^*(1 − ρ*)^4    (110)

The left-hand side was discussed before.
The right-hand side of (110) is the sum of the probabilities that constitute the event described by K: the event of a generic relaxation process at scale b^{(k+1)} not affecting neighbouring cells consists of the event that a p_1 process at scale b^{(k)} encounters a stable site, the event that a p_2 process at scale b^{(k)} encounters two stable sites, and so on. The probability of encountering a stable site is asymptotically and globally given by 1 − ρ*, which gives (110). At this point we have already assumed k to be very large, so because of scale-invariance we can drop the specific scales and insert the fixed-point values. Combining our expression for K with (110) while inserting the numerically found fixed-point parameters, we find

τ = 1 − log(1 − K) / (2 log 2) ≈ 1.253    (111)

This value is in good agreement with some numerical simulations on large system sizes, such as Manna [4] and Lübeck [21]. However, Bak et al. [1] and Golyk [19] find a value of τ much closer to 1. Our own simulations also find a value of τ almost equal to 1. This difference will be discussed later, in the section on numerical simulation. One can also calculate the dynamical exponent z that appears in the scaling law t ∼ r^z by the ansatz t_k ∼ (b^{(k)})^z, where t_k is the average time taken by a relaxation process at scale b^{(k)}. In this report, however, we have not considered this scaling law and thus refer the reader to [16] for further information.
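The computation of K and τ from the fixed-point values can be reproduced in a few lines. This is an illustrative Python sketch of equations (110) and (111), not the thesis's own code:

```python
import math

def tau_from_fixed_point(rho, p):
    """Compute tau from the fixed point (rho*, P*) of the renormalisation flow."""
    # Equation (110): probability that a relaxation stays within one scale step
    K = sum(pn * (1 - rho) ** n for n, pn in enumerate(p, start=1))
    # Equation (111): tau = 1 - log(1 - K) / (2 log 2)
    return 1 - math.log(1 - K) / (2 * math.log(2))

tau = tau_from_fixed_point(0.468, (0.240, 0.442, 0.261, 0.057))
# tau comes out at approximately 1.25, reproducing (111)
```

Running this with the fixed-point values of (108) gives K ≈ 0.297 and τ ≈ 1.254, in agreement with the value quoted in (111).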


7.6 Introducing dissipation

Now that we have estimated our critical exponents in the non-dissipative case, we continue by considering the uniformly dissipative case. As already mentioned, this is the only case of dissipation that we can analyse using renormalisation group theory. Local variations of dissipation, as we analysed using the mathematical tools of the Feynman-Kac equation, cannot be treated under the assumptions of renormalisation theory, since renormalisation assumes infinite correlation lengths and scale invariance, together with a fixed translation-invariant space. To introduce dissipation, we do not simply use the kind of dissipation that was analysed mathematically. Rather, we will exploit a continuous dissipation parameter denoted by γ, just as in the mathematical proof that a uniformly dissipative Abelian Sandpile is not critical. It is interesting to see the effects of varying this parameter. One question is whether global conservation over the entire lattice is equivalent to adding both sources and sinks to the system in a way that ensures global conservation. It is now convenient to denote the total amount of energy lost by a generic cell at scale b^{(k)} upon relaxation by ∆E^{(k)}, and the energy actually transferred to neighbouring sites by ∆E_out^{(k)}. This way, the global dissipation parameter γ can be written as

γ^{(k)} = 1 − ∆E_out^{(k)} / ∆E^{(k)}    (112)

Since both our numerical and analytical calculations predict loss of SOC when γ > 0, we can view γ as a control parameter with critical value γ = 0. However, this parameter is not the same as our usual dissipation parameter. In fact, when γ = 0, no dissipation occurs at all; when γ = 1, no relaxation events occur at all. We incorporate the dissipation parameter γ in our renormalisation equations as follows:

p_n^{(k+1)} = Σ_{α=2}^{4} W_α(ρ^{(k)}) Σ_{{ω_n}_α} f_{ω_n}({p_{n'}^{(k)}}, γ^{(k)})

where the weights f_{ω_n} are modified such that one or more processes in the conservative weight function have a probability of dissipating mass. Because of this, the subprocesses at scale b^{(k)} acquire an additional weight γ^a (1 − γ)^b, where a and b are the numbers of processes dissipating and transferring energy, respectively. Some results on the renormalisation of a dissipative sandpile model are known [16], but the behaviour of a dissipative sandpile is by no means fully analysed by them. Here, we give a few remarks.

• Vespignani et al. [16] found that a fixed global dissipation parameter destroys SOC. This agrees with both our analytical and numerical results. However, they use a different definition of dissipation: by their definition, a cell simply does not relax. The definition of dissipation we use in our mathematical derivations is less pervasive. Applying our definition, for example, affects multiple parameters in the renormalisation procedure. First of all, the density of critical cells ρ will decrease, because one more stable height is added when a site is dissipative: it topples at a higher threshold value.

• Adding source sites will yield an extra term in the renormalisation equations. The introduction of sources will raise the density of critical sites. Because a lower height is required for a source site to topple, it could be the case that there exists an equilibrium density of sources and sinks that conserves mass globally, preserving the critical nature of the avalanche dynamics.

At this point, we have to look at numerical results to gain further knowledge about critical exponents and the underlying distributions of avalanche sizes.
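The weight modification above can be made concrete with a small sketch: a single conservative transfer of n grains splits into contributions in which a grains dissipate and b = n − a are transferred, each carrying the factor γ^a (1 − γ)^b. This is an illustrative Python sketch; the binomial multiplicity comb(n, a) is our own assumption about how the a dissipating grains are chosen, and is not given in the text.

```python
from math import comb

def dissipative_split(n, gamma):
    """Split one conservative n-grain transfer into dissipative contributions.
    Each outcome (a dissipated, b = n - a transferred) carries the weight
    gamma^a * (1 - gamma)^b, with an assumed binomial multiplicity comb(n, a).
    Returns {(a, b): weight}; the weights sum to one."""
    return {(a, n - a): comb(n, a) * gamma**a * (1 - gamma)**(n - a)
            for a in range(n + 1)}
```

By the binomial theorem the weights sum to one, so the modification redistributes, rather than changes, the total weight of each conservative process.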


8 Numerical simulation

Numerical simulation constitutes a very important part of this project. Conjectures and insights have been gained through numerical computation, and certain exponents have been estimated. However, numerical simulation can only be used for finite systems; for the transition between finite and infinite systems, only robust mathematical derivations are suitable. Although only finite computations can be made, we can estimate the global behaviour of the model by iterating the simulation over different system sizes to obtain exponents at different scales.

In order to simulate the behaviour of the finite-volume one- or two-dimensional sandpile, a fast programming language is required. We used Fortran, an ancient programming language developed in 1957 by IBM, and worked in Linux. At first, this proved to be a real struggle to install; later on, however, one gets used to the operating system and comes to prefer it over Windows. This is almost a law of nature. To learn the language, Jos Thijssen's coding notes on Fortran have been used [18]. The benefit of Fortran 95 over, for example, Matlab is that it is much faster in loops. Strictly speaking, Fortran is a programming language, while Matlab is more of a tool, itself programmed in the programming language C. However, Matlab features some excellent tools for graphics and statistics, so both programs are used. We also considered R, the standard for statistical computing, but R was not able to handle large datasets of several GB very well, while Matlab was.

8.1 Simulating the ASM

In general, we simulated the Abelian Sandpile model using Fortran 95. This language works via the Linux terminal. First, we created the code in an editor. Code files come with a .f90 extension and can globally be divided into two parts: the main part and the subroutine/function part. In the main part, one calls subroutines and functions, possibly defined in the same .f90 file. This is very convenient for keeping the code neat. Subroutines and functions can call each other and are very versatile. The difference between them is that a function, when called, produces a single output given its input parameters, whereas a subroutine can be viewed as a 'sub-program', running when called in the main part. Fortran also has some very useful intrinsic functions, such as where, any and pack. Together with its overall syntax and semantics, it is a very appropriate tool for simulating the BTW model and related statistical physics models. A particularly interesting feature of Fortran 95 is the so-called recursive subroutine: when called, it can execute itself and is thereby called 'in itself'. In the modelling of avalanches, which form a simply connected set of sites, this is a very efficient way to simulate. Indeed, when any interior site x ∈ V is unstable (and it is the only one), it topples; the only sites that can become unstable next are its 2d neighbouring sites. In a natural way, a recursive subroutine is ideal for the job. We only simulated in one or two dimensions. We used L to denote the linear dimension of V: so V = {1, 2, ..., L} in the d = 1 case and V = ([1, L] ∩ Z)² in the d = 2 case. The number of total iterations in the model, which equals the total number of added particles, is denoted by N. Then, the model runs in time. We first initialise the system by assigning a value 2d − 1, 2d or 2d + 1 to each x ∈ V.
This matrix of values is saved as distr. These numbers represent the critical heights at each lattice point: 2d corresponds to a normal site, 2d − 1 corresponds to a source site and 2d + 1 to a sink site. The initial height configuration η_0(x) = 0 for all x ∈ V is also defined. A uniform stochastic variable X defined on V is realised to determine the x ∈ V where a particle is added. In short:

P (X = x) = |V |−1 for all x ∈ V

The intrinsic function random_number(x) does the job. Thereafter, we overwrite η by η + δ_x. This process is repeated in time, until upon addition of a grain at some x ∈ V, it is the case that η(x) ≥ distr(x).

46 8 NUMERICAL SIMULATION

If this is the case, the recursive toppling subroutine is called, which recursively scans the lattice beginning at x. This is much more efficient than looping through the entire lattice V. When the system is stabilisable, we know that the recursive subroutine will terminate after a finite number of 'sub-iterations'. Otherwise, the program returns a segmentation fault, indicating an infinite loop.
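For readers without a Fortran environment, the toppling procedure can be sketched in Python. This is an illustrative reimplementation, not the thesis's Fortran code: it uses an explicit stack instead of a recursive subroutine (Python's recursion depth is limited), and it only handles normal sites with critical height 2d = 4 on an open-boundary square lattice.

```python
import random

def relax(h, L, crit=4):
    """Stabilise an L x L sandpile h (open boundary: grains fall off the edge).
    Returns (topplings t, affected sites s, boundary grains b) per avalanche."""
    stack = [(x, y) for x in range(L) for y in range(L) if h[x][y] >= crit]
    topplings, affected, boundary = 0, set(), 0
    while stack:
        x, y = stack.pop()
        if h[x][y] < crit:      # may already have been relaxed via another path
            continue
        h[x][y] -= 4            # topple: send one grain to each neighbour
        topplings += 1
        affected.add((x, y))
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < L and 0 <= ny < L:
                h[nx][ny] += 1
                if h[nx][ny] >= crit:
                    stack.append((nx, ny))
            else:
                boundary += 1   # grain dissipated at the open boundary
    return topplings, len(affected), boundary

def drive(h, L, n_grains):
    """Add n_grains at uniformly random sites, relaxing after each addition;
    returns the list of (t_i, s_i, b_i) avalanche statistics."""
    stats = []
    for _ in range(n_grains):
        x, y = random.randrange(L), random.randrange(L)
        h[x][y] += 1
        stats.append(relax(h, L))
    return stats
```

The triples (t_i, s_i, b_i) correspond to the three avalanche-size characterisations used in the data analysis later on, and the inequality s_i ≤ t_i holds by construction.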

Figure 12: A typical sandpile in two dimensions in its critical state. In this simulation L = 100. To plot η, we have used a plotting package programmed by Jos Thijssen.

The time complexity of the algorithm was tested and found to be of order NL². This is to be expected, since we are dealing with a simulation that is linear in N and quadratic in L. However, for large L the time complexity seems to grow faster than L². In that case, avalanches tend to span the lattice but also cross the interior of the lattice multiple times, due to the relatively large distance from the interior to the boundary of the system.

[Figure: run time t versus lattice size L on a log-log scale, for the classical and the dissipative BTW model.]

Figure 13: The time complexity of our main simulation code, plotted on log-log scale.


Extracting data The toppling numbers n_x, as defined before, are saved at each iteration. This yields an N × 3 array containing:

• In its first column, the total toppling number t_i upon addition of a grain at iteration i, 1 ≤ i ≤ N, is saved. In the two-dimensional simulation, we essentially save L² integers at each iteration, containing the toppling numbers n_i(x): the number of times site x topples during avalanche i. The total toppling number at iteration i is then given by t_i = Σ_{x∈V} n_i(x).

• In its second column, we save the avalanche size s_i of iteration i of the process. This quantity is equal to the number of affected sites and can be viewed as identical to the mathematical notion |C_V|. Since a site can topple more than once in an avalanche, we always have the inequality s_i ≤ t_i.

• In its third column, we save the number of particles b_i falling off the boundary of V at iteration i, so b_i = Σ_{x∈∂V} n_i(x). This quantity is particularly interesting for real sandpile experiments, as it is the easiest quantity to measure: one simply has to weigh the sandpile in time to obtain these avalanche size variables.

When the process runs, one thus obtains a sequence of realisations of random variables. To avoid complexities, I use T, S and B to denote the stochastic variables of the above enumeration, respectively. The raw data are exported to a .txt file and imported into Matlab. The datasets are relatively large: 360 MB for N = 10^7. Because the superior statistical tool R could not handle datasets that large, we have used Matlab.

Visualizing raw data This way, we have almost N realisations of stochastic variables representing the different avalanche size characterisations. Not exactly N, because during the build-up of the system the behaviour is not yet critical. From numerical considerations, it follows that after approximately 2.125 × L² iterations the behaviour of the ASM becomes critical. This is in good agreement with previous analytic results, in which it was found that the mean mass per site in the critical state is given by exactly 2.125 in two dimensions.

48 8 NUMERICAL SIMULATION

[Figure: time series of t_i, s_i and b_i against the iteration number i.]

Figure 14: Raw avalanche size data. The upper plot shows total toppling number sizes ti in a small interval in the critical state. The middle plot shows the number of affected sites si in the same interval, and the bottom plot shows the number of particles falling from the boundary bi in time. The depicted interval is much smaller than the actual simulation time, such that the build-up phase of the system is clearly visible. In physics literature, signals like this are often referred to as ‘1/f noise’.

According to the paper of BTW, these different avalanche quantities are distributed as power laws, indicating critical behaviour. This was indeed found, as is shown in the following figures. However, the quantity of particles falling from the boundary bi does not exhibit clear power-law behaviour. Later, we will provide a more detailed analysis of these data.


[Figure: log-log plot of the empirical density f_T(t) against the total toppling number t; L = 80, N = 10^7, classical model.]

Figure 15: Power-law distribution of the total toppling numbers t. In this simulation, 10,000,000 grains have been dropped on an 80 × 80 lattice consisting of only normal sites. The behaviour at large t is a finite-size effect.

[Figure: log-log plot of the empirical density f_S(s) against the avalanche size s; L = 80, N = 10^7, classical model.]

Figure 16: Power-law distribution of the number of affected sites s. In this simulation, 10,000,000 grains have been dropped on an 80 × 80 lattice. The behaviour at large s is a finite-size effect, which is well explainable given the total number of sites L² = 6400: the inequality s ≤ L² must always be satisfied.


[Figure: log-log plot of the empirical density f_B(b) against the number of grains b falling off the boundary; L = 80, N = 10^7, classical model.]

Figure 17: The number of particles falling off the rectangular lattice in each avalanche does not appear to follow a power-law distribution. In this simulation, 10,000,000 grains have been dropped on an 80 × 80 lattice.

Simulations on uniformly dissipative systems When uniform dissipation is introduced, critical behaviour is lost. This can be seen nicely in the following figure, where we simulated different degrees of dissipation on an 80 × 80 lattice.


[Figure: log-log plot of the empirical density f_S(s) against the avalanche size s for various degrees of uniform dissipation; L = 80, N = 10^7.]

Figure 18: Simulation of the ASM with various amounts of uniform dissipation: none, 1%, 2%, 4%, 8%, 16%, 32%, 64%, 100%. In these simulations, 10,000,000 grains have been dropped on an 80 × 80 lattice.

This system size has been chosen such that finite-size effects do not influence a large part of the avalanche size data, while at the same time enough data can be generated in a reasonable amount of time. We observe that the avalanche sizes S now follow another distribution: it is neither an exponential distribution nor a power-law distribution. In fact, we were not able to identify a well-known distribution behind these avalanche size data. Further investigation is required.

Verification of conjectures In the section concerning the one-dimensional ASM with the effects of sinks and sources, we conjectured that the stabilisability of a system can be linked directly to the toppling matrix ∆^{D_n,F_n}. To test this, we performed simulations on the lattice V = [−n, n] ∩ Z, where n could take varying values. The conjecture has never been disproved. For example, we performed simulations on a one-dimensional ASM with n = 25, where sites i and i + 1 were σ and ν, respectively, and the rest were δ sites. We found that the stabilising behaviour depends strongly on i. For larger |i|, the total number of topplings starting from η_0(x) = 1 for all x ∈ V_n increased exponentially: for i = 1 we had just one toppling, for i = 10 already 46345 topplings in total, and for i = 21 a total of 1836311858 topplings before a stable configuration was obtained. Furthermore, when inverting the corresponding toppling matrices using Matlab, numerical singularities were encountered. This is indicative of the chaotic behaviour when sources are present, just as we encountered mathematically.


9 Critical avalanche data analysis

In this section, we present the methods that were used to estimate the critical avalanche exponent τ governing the behaviour of the BTW model in the critical state, and we provide some results to illustrate our method. We also present some results for the uniformly dissipative case, to show that it does not follow power-law behaviour. Firstly, we present the method followed by BTW [1]; thereafter, we present a more statistical approach based on a number of goodness-of-fit tests. From BTW [1] and Golyk [19], we know that the avalanche size stochastic variable S follows the power law

f_S(s) ∼ s^{-τ}    (113)

where we have again defined the avalanche size S as the total number of affected sites in the avalanche dynamics. There is some ambiguity in the literature about the definition of avalanche sizes, but we have chosen to analyse only this type of avalanche size in depth. The data associated with the other characterisation, which defines the avalanche size as the total number of toppling events, have also been generated by our simulations but are not analysed in a statistical manner. These results can be found in the next section and the appendices.

9.1 The BTW approach

This approach to calculating the critical avalanche exponent τ is very straightforward. After simulating the BTW model in two dimensions, a number of realisations of the stochastic variable S are obtained, which we denote by {s_i, 1 ≤ i ≤ N}. These are filtered to exclude the avalanches generated in the build-up phase of the dynamics. Zero-size avalanches are also excluded (technically these are not even avalanches, but our simulations record them anyway). Numerically, it is found that the critical phase of the model is reached after the addition of approximately 2.125 L² particles, which agrees well with the results of previous numerical simulations. Dividing the raw avalanche size data into bins of length one by setting

w_j = #{i : s_i = j},   j = 1, ..., max{s_i : 1 ≤ i ≤ N}    (114)

we plot the empirical density function, or histogram, of w_j against j to obtain the following figure.

[Figure: log-log plot of the empirical density f_S(s) against s, with the power-law fit f_S(s) ∼ s^{-τ}, τ ≈ 0.9912; L = 80, N = 10^7, classical model.]

Figure 19: A log-log plot of the empirical density function w_j against j. The parameters of this simulation are L = 80, N = 10^7, for the original BTW model with closed boundary conditions. The red line represents the best power-law fit, with exponent τ(80) ≈ 0.9912.


Clearly, a straight line can be observed in the regime of small avalanche sizes. A standard Matlab curve-fitting tool has been used to estimate the critical exponent: τ ≈ 0.9912, with 68% confidence region (0.9902, 0.9922). The mean avalanche size was found to be 357, although due to the power-law nature of avalanche sizes this is not a good summary of the data at hand. The exponents for other system sizes are given in the following table:

Table 5: A table summarizing the critical exponents τ̂(L) for the classical BTW model and their 68% confidence intervals. These data have been obtained by fitting the avalanche size data to a power law using Matlab.

L                        10      20      40      80      160     320     640
τ̂(L)                     0.941   0.953   0.974   0.9912  1.005   1.014   1.018
68% confidence interval  ±0.031  ±0.026  ±0.024  ±0.022  ±0.020  ±0.019  ±0.018
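The binning-and-fitting step of this approach can be sketched in Python. This is an illustrative version, not the thesis's Matlab code; the unweighted least-squares fit of log w_j against log j is our assumption about what the curve-fitting tool does.

```python
import math

def fit_power_law(sizes, s_max=None):
    """BTW-style estimate of tau: least-squares fit of log w_j against log j,
    where w_j is the number of avalanches of size j (equation (114)).
    sizes: positive integer avalanche sizes; s_max: optional cutoff used to
    drop the finite-size-affected tail."""
    counts = {}
    for s in sizes:
        if s > 0 and (s_max is None or s <= s_max):
            counts[s] = counts.get(s, 0) + 1
    js = sorted(counts)
    xs = [math.log(j) for j in js]
    ys = [math.log(counts[j]) for j in js]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # the estimate of tau
```

On synthetic data whose histogram follows an exact power law, the routine recovers the input exponent, which is a useful sanity check before applying it to avalanche data.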

Finite-size and discreteness effects The extreme values of the simulations are left out of the analysis of exponents; in practice, this meant that avalanches above size L²/2 were excluded. It can be seen that for small L the finite-size effects play a large role in the avalanche size distribution. Indeed, when the sample size (the number of added particles) is large relative to the system size, one observes an almost 'continuous' line in the power-law plot, as in the L = 10 panel of Figure 20. However, in the case of L = 640 one observes wildly fluctuating tails, because the total number of particles is too small to sample the large-avalanche region sufficiently.

[Figure: two log-log plots of the empirical density f_S(s) against s, with power-law fits; left: L = 10, τ ≈ 0.941; right: L = 640, τ ≈ 1.018.]

Figure 20: Log-log plots of the empirical density function w_j against j. The parameters of the simulation depicted left are L = 10, N = 10^7, for the original BTW model with closed boundary conditions; the simulation depicted right has L = 640. Finite-size effects and undersampling effects are clearly visible.

Estimating the infinite-volume limit critical exponent τ_∞ A similar analysis has been done for the other system sizes: to compute the critical exponent τ, we have done simulations on lattices with L = 10, 20, 40, 80, 160, 320, 640. The number of sites was thus quadrupled every time, in order to test the hypothesis of finite-size scaling as discussed before. Denote the 'real' critical exponent by τ_∞ = lim_{L→∞} τ(L). It was conjectured by Manna [4] that under finite-size scaling, the critical exponents of the classical BTW model follow the relation:

τ(L) = τ_∞ − C / ln(L)    (115)


where C is a constant. This conjecture has been tested by plotting our finite-size critical exponents against the dimensionless quantity 1/ln(L), as can be seen in the following figure:

[Figure: τ(L) plotted against 1/ln(L), with the linear fit τ(L) ≈ −0.3281/ln(L) + 1.068.]

Figure 21: Determination of τ∞ using the extrapolation proposed in (115). One can see that τ∞ ≈ 1.068 ± 0.035.

Since the critical exponents more or less form a straight line in the plot of the exponents versus 1/ln(L), we conclude that Manna's conjecture of finite-size scaling of the exponents holds. Extrapolating by letting L → ∞, we obtain our final estimate of the critical exponent, τ = τ_∞ ≈ 1.068, with a relatively large uncertainty of ±0.035 due to the extrapolation procedure and the initial uncertainties on the exponents. This is mainly due to the large distance of the measured values from the vertical axis. It is thus in principle not possible to obtain the exponents of the BTW model with high accuracy by a simple extrapolation of the exponents via equation (115). The individual exponent uncertainties can be decreased by increasing the total avalanche sample size, although discreteness and finite-size effects put a lower bound on the uncertainties. Alternatively, many identical simulations can be run and the mean of the realisations of τ(L) taken as the estimate, decreasing the uncertainty in τ(L).
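The extrapolation itself is a one-line linear fit, sketched here in Python using the values from Table 5. This is an unweighted least-squares fit; the Matlab fit in Figure 21 may weight the points by their individual uncertainties, so the coefficients can differ slightly from the quoted −0.3281 and 1.068.

```python
import math

L_values = [10, 20, 40, 80, 160, 320, 640]
tau_values = [0.941, 0.953, 0.974, 0.9912, 1.005, 1.014, 1.018]

def extrapolate_tau(Ls, taus):
    """Fit tau(L) = tau_inf - C/ln(L), equation (115), by unweighted least
    squares on x = 1/ln(L); returns (tau_inf, C)."""
    xs = [1 / math.log(L) for L in Ls]
    n = len(xs)
    mx, my = sum(xs) / n, sum(taus) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, taus)) / \
            sum((x - mx) ** 2 for x in xs)
    tau_inf = my - slope * mx   # intercept at 1/ln(L) -> 0, i.e. L -> infinity
    return tau_inf, -slope

tau_inf, C = extrapolate_tau(L_values, tau_values)
```

The unweighted fit lands within the quoted uncertainty of ±0.035 around τ_∞ ≈ 1.068, illustrating how strongly the intercept depends on the fitting convention.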

9.2 Likelihood estimation of τ

The second approach that we consider is essentially the one described by Virkar and Clauset [22]. They introduced their method for continuous power-law distributions, but since the avalanche size can only take discrete values, we have to slightly modify the procedure, as described in [23]. The general expression for a discrete power-law distribution is given by

P(S = s) = s^(−τ)/ζ(τ), s ∈ N    (116)


where τ > 1 and ζ(τ) is the Riemann zeta function,

ζ(τ) = Σ_{n=1}^∞ 1/n^τ, Re(τ) > 1.

Note that the requirement τ > 1 for real τ is necessary to guarantee the convergence of the Riemann zeta function. This immediately puts a lower bound on the critical exponent τ. We now put an upper bound on our avalanche size data s_i to exclude finite-size effects, and denote the number of data left after filtering by M. Assuming that (116) is the underlying density of our avalanche size variable S, we calculate the likelihood function of τ given the data realizations s_i. Note that under mild regularity conditions, if the data are independent, identically distributed draws from our power-law distribution (116) with parameter τ, it is a well-known result that as M → ∞, the maximum likelihood estimate of τ, denoted by τ̂, converges to the real critical exponent τ almost surely.

L(τ|{s_i}) = Π_{i=1}^M P(S = s_i) = (1/ζ(τ)^M) Π_{i=1}^M s_i^(−τ)

which gives as our log-likelihood function

l(τ|{s_i}) = log(L(τ|{s_i})) = −M log(ζ(τ)) − τ Σ_{i=1}^M log(s_i)

Now we take the derivative of the log-likelihood with respect to τ to find

∂l(τ|{s_i})/∂τ = −M ζ′(τ)/ζ(τ) − Σ_{i=1}^M log(s_i)    (117)

Since the Riemann zeta function converges uniformly, and so does its derivative, the derivative of ζ(τ) is simply given by

ζ′(τ) = −Σ_{n=2}^∞ log(n)/n^τ    (118)

So, by setting the left-hand side of (117) equal to zero, we obtain for our maximum likelihood estimate of τ, which we denote by τ̂:

(Σ_{n=2}^∞ log(n)/n^τ̂) / (Σ_{n=1}^∞ 1/n^τ̂) = (1/M) Σ_{i=1}^M log(s_i)    (119)

Note that τ̂ is not a stochastic variable: it is a function of the realizations of S. Unfortunately, a closed-form expression for τ̂ is very difficult to obtain, so we must solve (119) for τ̂ numerically. We approximate ζ(τ) and ζ′(τ) using a finite number of terms and numerically solve for τ using Matlab. This way, we obtained the following estimates of τ for different values of L.
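Equation (119) can be solved numerically by truncating both zeta sums and bisecting on τ, since the ratio on the left-hand side is strictly decreasing in τ (it is the mean of log n under the power law, and shifts towards small n as τ grows). A sketch in Python (the thesis used Matlab for this step; the truncation point nmax and the search bracket are arbitrary illustrative choices):

```python
import math

def zeta_ratio(tau, nmax=20000):
    """Truncated version of -zeta'(tau)/zeta(tau), the LHS of (119)."""
    num = sum(math.log(n) / n**tau for n in range(2, nmax + 1))
    den = sum(1.0 / n**tau for n in range(1, nmax + 1))
    return num / den

def mle_tau(samples, lo=1.01, hi=5.0, tol=1e-6):
    """Bisection solve of (119): zeta_ratio(tau) = mean(log s_i)."""
    target = sum(math.log(s) for s in samples) / len(samples)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if zeta_ratio(mid) > target:
            lo = mid  # ratio still too large: tau-hat lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The truncation of the zeta sums introduces a small bias for τ close to 1, where the tails converge slowly; increasing nmax reduces it.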

Table 6: Maximum likelihood estimates of the critical exponent τ̂(L) for the classical BTW model, assuming (116) as the underlying density.

L       10      20      40      80      160     320     640
τ̂(L)    1.441   1.348   1.286   1.248   1.223   1.206   1.195


It is remarkable that these exponents are dramatically different from the ones obtained by the previous method of calculating the critical exponents. Moreover, Kolmogorov-Smirnov testing gave a very bad fit when the estimated critical exponents were used in the distribution (116), because the assumed distribution is heavy-tailed; on finite system sizes, such a heavy tail is not realizable.

Kolmogorov-Smirnov test statistic We now apply the Kolmogorov-Smirnov test statistic to determine whether the estimated power-law exponent gives a good fit. Following [24], the Kolmogorov-Smirnov test statistic for power-law distributions can be formulated as

D_emp = max_{s∈N} |M_s/M − S_est(s; τ̂)|    (120)

where S_est(s; τ̂) denotes the estimated survival function of avalanche sizes, defined by

S_est(s; τ̂) = P(S ≥ s) = Σ_{n=s}^∞ P(S = n)

and M_s = #{s_i : s_i ≥ s}. If D_emp is small, it follows that our fit is good, because the predicted survival function and our empirical survival function do not differ too much. We therefore need a method to determine whether D_emp is small or large; this will be done by bootstrapping data from the power-law distribution (113) with τ = τ̂(L) for every L. We then compare the bootstrapped data to the actual data and investigate the reliability of our estimate τ̂(L). A plot of the theoretical survival function and the empirical survival function is given in Figure 22.
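The statistic (120) can be computed directly by sweeping s upwards, comparing the empirical survival function M_s/M with the fitted one, where the infinite sum in S_est is truncated at a large cutoff. A minimal sketch (the truncation smax is an illustrative assumption):

```python
import math
from collections import Counter

def ks_statistic(samples, tau, smax=100000):
    """Compute D_emp of (120) for data assumed to follow (116)."""
    M = len(samples)
    weights = [n**-tau for n in range(1, smax + 1)]
    Z = sum(weights)  # truncated zeta(tau)
    # surv[s] approximates S_est(s) = P(S >= s) for 1 <= s <= smax,
    # accumulated from the tail for numerical stability
    surv = [0.0] * (smax + 2)
    tail = 0.0
    for n in range(smax, 0, -1):
        tail += weights[n - 1]
        surv[n] = tail / Z
    counts = Counter(samples)
    d, Ms = 0.0, M  # Ms = #{s_i >= s}, updated while s increases
    for s in range(1, max(samples) + 1):
        fitted = surv[s] if s <= smax else 0.0
        d = max(d, abs(Ms / M - fitted))
        Ms -= counts.get(s, 0)
    return d
```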

[Plot: empirical survival function M_s/M and fitted survival function S_est(s; τ̂) versus s, with the Kolmogorov-Smirnov statistic D_emp indicated]

Figure 22: The empirical survival function M_s/M, the fitted survival function S_est(s; τ̂) and the Kolmogorov-Smirnov statistic D_emp plotted in one figure. It is assumed that the data are i.i.d. realisations drawn from distribution (116). These data were obtained by simulating the BTW model on an 80 × 80 lattice using 10^7 particles.


Testing goodness of fit: bootstrapping To see whether our estimated avalanche data exponent τ̂(L) fits the data well, we must construct datasets according to the distribution (113) for different values of τ̂(L). Normally we would use the transformation method, which says that if U ∼ U[0, 1], uniform on [0, 1], and we wish to generate data sampled from a cumulative distribution function F_X, then

X = F_X^(−1)(U)

has cumulative distribution function F_X. This is rather cumbersome in our case of a discrete power law. However, since the range {1, ..., L²} of S is reasonably large, we can approximate the discrete power law by a continuous one and thereafter round our samples to the nearest integers to obtain a set of bootstrapped random numbers distributed according to (113). For a continuous power law, the transformation can conveniently be written as

X = x_min (1 − U)^(−1/(τ−1))    (121)

where U ∼ U[0, 1]. Because a power law is not integrable on [0, ∞), we have to define a lower bound x_min above which it is integrable. Now, X follows a power-law distribution defined on [x_min, ∞). Since our avalanche sizes are discrete, we must 'round the density' f_X to the nearest integers. This is done by subtracting 1/2 from x_min, adding 1/2 to the result, and rounding down:

X_discr = ⌊(x_min − 1/2)(1 − U)^(−1/(τ−1)) + 1/2⌋    (122)
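The sampler (122) is straightforward to implement; a sketch in Python with x_min = 1 (the parameter values in the demonstration call are illustrative only):

```python
import math
import random

def sample_discrete_powerlaw(tau, n, xmin=1, seed=0):
    """Approximate discrete power-law samples via (121)-(122):
    X = floor((xmin - 1/2) * (1 - U)**(-1/(tau - 1)) + 1/2)."""
    rng = random.Random(seed)
    return [math.floor((xmin - 0.5) * (1.0 - rng.random()) ** (-1.0 / (tau - 1.0)) + 0.5)
            for _ in range(n)]

samples = sample_discrete_powerlaw(tau=2.5, n=10000)
```

Note that this continuous-then-round approximation is slightly biased for the smallest values of s, which is one reason the bootstrapped fits should be checked against the exact discrete law when possible.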

We can now generate many numbers according to (122). Thereafter, we perform a maximum likelihood estimate of the exponent τ of the bootstrapped data, which we call τ̂_sim, and calculate the Kolmogorov-Smirnov statistic of these data using (120), which we denote by D_sim. This process of bootstrapping is repeated 100 times to generate 100 realisations of D_sim, which are denoted d_sim^j, 1 ≤ j ≤ 100. Now, the p-value of our estimate can be written as

p = (1/100) #{j : d_sim^j > D_emp}    (123)
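The bootstrap p-value (123) then only needs a sampler for the fitted distribution and a KS-statistic routine; a generic sketch (both are passed in as callables, since their concrete forms are given elsewhere in this section):

```python
def bootstrap_pvalue(d_emp, tau_hat, sampler, ks_func, n_boot=100, n=10000):
    """Fraction of bootstrapped KS statistics exceeding the observed one,
    i.e. p = (1/n_boot) * #{j : d_sim^j > d_emp}, as in (123).

    sampler(tau, n) must return a fresh dataset from the fitted law on
    each call (i.e. it should be randomised per call), and
    ks_func(data, tau) must return the KS statistic of (120)."""
    exceed = 0
    for _ in range(n_boot):
        boot = sampler(tau_hat, n)
        if ks_func(boot, tau_hat) > d_emp:
            exceed += 1
    return exceed / n_boot
```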

If our assumption about the underlying density of avalanche sizes is correct, D_emp will be small. Therefore, a low p-value is an indication of a bad fit, whereas a large p-value indicates a good fit. Unfortunately, our simulations always yielded bad p-values of around zero: p ∈ [0, 0.01] for all L. This is most certainly due to the cut-off avalanche size s_max. We conclude that (116) is a poor representation of the avalanche size data that were obtained.

Power-law cut-off The power laws described in the present section are ideal: the continuous power law is defined on the interval [1/2, ∞), while the discrete power law (116) is a probability distribution on N. In our simulations, however, we have defined a certain cut-off avalanche size s_max above which finite-size effects come into play. This means that in reality we omit a large part of our distribution (116). Since the distribution is heavy-tailed, this may influence the maximum likelihood estimate significantly. Clauset et al. comment that power laws with cut-offs are difficult to analyse, and they do not provide a way to fit such distributions.

9.3 Truncated power law MLE estimation

However, let us sketch an approach that may yield reasonable results. The cut-off avalanche size is chosen to be s_max = L²/2, which seems a natural candidate: investigating BTW plots such as Figure 19 mostly shows aberrant behaviour at approximately this value. However, more thorough investigation into how to optimally use the available data is recommended. We now assume that the avalanche size data are


independent, identically distributed realizations of a stochastic variable with the density

g_S(s) = P(S = s) = s^(−τ)/ξ(τ, s_max), s ∈ {1, ..., s_max}    (124)

This 'truncated power law' is normalised by defining the ξ function as

ξ(τ, s_max) = Σ_{k=1}^{s_max} 1/k^τ    (125)

and it is easy to see that ξ(τ, s_max) → ζ(τ) as s_max → ∞. With s_max as our cut-off parameter, we can perform the same derivations as for the infinite-domain power law to obtain our log-likelihood function

l(τ|{s_i}) = −M log(ξ(τ, s_max)) − τ Σ_{i=1}^M log(s_i)    (126)

where M = #{s_i : 1 ≤ s_i ≤ s_max} denotes the total number of reliable avalanche size data. Maximizing the log-likelihood gives our estimate of τ, which we denote by τ̂_g, the subscript g indicating the assumed distribution (124). We thereafter perform the Kolmogorov-Smirnov goodness-of-fit test to obtain indications of the goodness of fit. Just as in the case of (122), we generate about 100 large datasets of realizations from (124). However, since g_S(s) = 0 for s > s_max, we reject samples larger than s_max to obtain proper bootstrapped datasets. We then calculate the Kolmogorov-Smirnov test statistics of the simulated datasets and new p-values are obtained. The results of fitting the avalanche size data to (124) are shown in the table below, with p-values included.
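Maximising (126) is a one-dimensional problem; since ξ(τ, s_max) is a finite sum, even a simple grid search works. A sketch in Python (the grid range and step are arbitrary illustrative choices):

```python
import math

def xi(tau, smax):
    """Truncated zeta normalisation of (125)."""
    return sum(k**-tau for k in range(1, smax + 1))

def mle_tau_truncated(samples, smax, tau_grid=None):
    """Grid-search maximiser of the truncated log-likelihood (126)."""
    M = len(samples)
    sumlog = sum(math.log(s) for s in samples)  # sufficient statistic
    def loglik(tau):
        return -M * math.log(xi(tau, smax)) - tau * sumlog
    if tau_grid is None:
        tau_grid = [0.5 + 0.005 * i for i in range(701)]  # tau in [0.5, 4]
    return max(tau_grid, key=loglik)
```

Note that, unlike the untruncated case, τ ≤ 1 is admissible here because the finite sum (125) converges for every real τ.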

[Plot: empirical survival function M_s/M and fitted truncated-power-law survival function S_est(s; τ̂) versus s, with the Kolmogorov-Smirnov statistic D_emp indicated]

Figure 23: The empirical survival function M_s/M, the fitted survival function S_est(s; τ̂) and the Kolmogorov-Smirnov statistic D_emp plotted in one figure. It is assumed that the data are i.i.d. realisations drawn from distribution (124). These data were obtained by simulating the ASM on a 10 × 10 lattice using 10^7 particles. One can see that this fit is much better than the fit to (116), resulting in a much smaller D_emp. We conclude that (124) fits the data better than (116).


The figure below depicts the distribution of d_sim together with the realization of D_emp.

[Histogram: distribution of the bootstrapped statistics d_sim, with the observed D_emp marked; p = 0.84]

Figure 24: The distribution of 100 realisations of d_sim, obtained by bootstrapping 100 datasets according to (124). The value of D_emp is also plotted. The analysis yields a p-value of 0.84, indicating a good fit. However, see the discussion below.

The phenomenon of 'p-hunting' is a trap for many scientists not familiar with statistics: one obtains a high p-value, but what does it say? At first we also retrieved high p-values by fitting the distribution (124) to our data. However, these large p-values were not the result of a good fit, but rather of poor bootstrapping: far too small bootstrapped datasets were fitted to (124), yielding bad Kolmogorov-Smirnov statistics. Of course we then obtain a high p-value! This anecdote makes it clear that thoughtful statistical analysis is very important.

9.4 Our recommendation

In their paper, Clauset and Virkar [22] list several strategies to model data such as avalanche size data. The one that we think is best is given by the density

h_S(s) = (λ^(1−τ)/Γ(1 − τ, λ)) s^(−τ) e^(−λs)    (127)

which they refer to as a power law with exponential cutoff. Unfortunately, we have not had the time and knowledge to fit this distribution properly to the data. Nevertheless, we think it is an excellent candidate for fitting the data, because it lies midway between the proposed distributions f_S and g_S that we have analysed in the present section. Difficulties concerning h_S include how to estimate λ and τ jointly, and how to generate samples according to h_S for bootstrapping. When these problems are overcome, it could give good results.


10 Self-Organized Criticality

This section contains a brief discussion of the phenomenon of self-organized criticality (SOC). Since the introduction of this concept in the BTW paper [1], phenomena of strikingly different backgrounds have been claimed to exhibit SOC behaviour. These include sandpiles, earthquakes, forest fires, rivers, mountains, cities, literary texts, electric breakdown, motion of magnetic flux lines in superconductors, water droplets on surfaces, dynamics of magnetic domains, growing surfaces, human brains, etc. Per Bak himself wrote a book titled 'How Nature Works: The Science of Self-Organized Criticality', in which he discusses a multitude of physical and non-physical phenomena that can be attributed to this underlying mechanism of nature. Bak frequently argues that his approach is radically different from the reductionistic approaches undertaken in classical mechanics. However, the concept of SOC has also received criticism from notable scientists, who argue that the emergence of power-law behaviour does not necessarily mean that a system naturally evolves to a complex state. In general, an important class of SOC phenomena is constituted by out-of-equilibrium systems driven at a slow and steady rate, possessing the following properties [15]:

• Very slow driving rate. In the BTW model, particles are added slowly and steadily while the stabilization occurs rapidly;

• Highly nonlinear behaviour: essentially a threshold input response. In the BTW model this is certainly the case, as demonstrated by large avalanche sizes due to the addition of a single particle;

• A globally stationary regime, characterized by global properties. Again, in the BTW model, this corresponds to the power-law exponents and the total mass that characterize the critical, stationary state of the model;

• Power-law distributions of event sizes and fractal geometrical properties, including long-range correlations.

There are some other mechanisms that contribute to the formation of SOC, such as conservation laws in the interior of a system and feedback of the order parameter on the control parameter. For example, in the Ising model we have the control parameter T (temperature) and the order parameter m (magnetization): T can be controlled externally, while m is a result of the system itself. If a feedback mechanism from m to T existed, it could well be the case that a nontrivial critical point is reached naturally, indicating SOC behaviour. Of course, in the Ising model no such feedback mechanism exists, but it may be an underlying mechanism in SOC models.

Examples Of course, a real-world sandpile is a natural candidate for exhibiting self-organized critical behaviour. One approach is to consider the classical BTW model and interpret the heights η(x) as local slopes. This way, a realistic sandpile is formed that exhibits a critical slope. By adding more and more particles, the slope will increase and eventually exceed the critical slope value. At this point, avalanches bring the slope back to approximately its critical value. In the past, there have been real-world experiments on these types of sandpiles. However, it is difficult to measure avalanche sizes. One could consider measuring the mass of the pile over time, whereby decreasing mass indicates particles leaving the boundary during an avalanche. However, our numerical simulations showed that the mass falling off the boundary of the finite-size BTW model does not follow power-law but exponential behaviour, which has also been found in experiments with real sandpiles. One could also place a camera or motion detector above the sandpile that records avalanches in time. Another complicating factor is formed by inertial effects in the avalanche dynamics when using sand. Therefore, rice has been tested


and it has been shown that long-grained rice exhibits behaviour agreeing most with the original BTW model, although not exactly.

Figure 25: An experimental setup to analyse sandpile dynamics.

A notable case of assumed self-organized criticality occurs in the earth's crust, which has often been proposed as a real-world paradigm of SOC. The four properties listed above are all present. The slow driving rate is present in the slow movement of tectonic plates. Eventually, the 'pressure threshold' is exceeded and an earthquake occurs, which can be viewed as highly nonlinear behaviour, releasing enormous quantities of energy in a very short time. These two time scales characterize the SOC nature of the earth's crust. Moreover, as described in Bak's book How Nature Works [2], earthquake magnitudes follow a power law over a wide range of magnitudes; this is known as the Gutenberg-Richter law. There are many other phenomena that exhibit power-law behaviour. Collectively, these appear to obey Zipf's law, a power law that applies, for example, to skyscraper height distributions or the distribution of words in a novel.

DLA To illustrate a different example of self-organized criticality, we have looked at a phenomenon called Diffusion Limited Aggregation (DLA). It is an example of fractal formation in nature, apparently without detailed fine-tuning of an external parameter as in, for example, the Ising model. DLA exhibits self-similar properties and scale invariance and is thought to be a simple principle underlying the formation of lightning, frost, coral and mineral crystals. The simplest model of DLA was invented in 1981 by Witten and Sander and goes as follows. An initial structure of particles is formed. The algorithm then releases a random walker from a random starting point far from the initial structure. If the random walker comes sufficiently near the already existing structure (within a pre-defined distance r), it aggregates and a particle is added to the structure at that location.


Figure 26: A realisation of our simulation DLA.f90, which generates a cluster of aggregated particles.

The fractal-like structure arises from the competition of two mechanisms. Since particles are constantly aggregating to the cluster, it has a natural tendency to grow evenly. On the other hand, a large branch can shield particles from aggregating by growing bigger itself. Particles are more likely to stick to outer perimeter sites than to interior sites, so a region surrounded by aggregated particles will most likely remain empty. Although no external fine-tuning is required to achieve self-similarity and fractal structure, one can incorporate various parameters to influence the average mass and shape of the resulting DLA cluster. For example, a 'stickiness' parameter µ ∈ [0, 1] may be introduced, giving the probability that a randomly walking particle sticks to the structure when it is less than r away from it. A large µ results in a relatively low-density cluster, while a low µ allows particles to walk towards the interior of the cluster, causing heavier clusters.
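The Witten-Sander algorithm described above can be sketched in a few lines. This is a minimal on-lattice version in Python, not a rendering of our DLA.f90 simulation; the launch ring, kill radius and sticking distance r = 1 are illustrative choices:

```python
import random

def dla_cluster(n_particles=100, seed=42):
    """Minimal on-lattice Witten-Sander DLA sketch. A walker is launched
    on a square ring just outside the cluster, performs a simple random
    walk, and aggregates as soon as one of its four neighbours belongs
    to the cluster (sticking distance r = 1); walkers that wander too
    far are relaunched."""
    rng = random.Random(seed)
    cluster = {(0, 0)}                # the initial structure: a single seed
    rmax = 0                          # Chebyshev radius of the cluster
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def launch(r0):
        t = rng.randrange(-r0, r0 + 1)
        return rng.choice([(r0, t), (-r0, t), (t, r0), (t, -r0)])

    while len(cluster) < n_particles:
        r0 = rmax + 2                 # launch ring just outside the cluster
        x, y = launch(r0)
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in moves):
                cluster.add((x, y))   # the walker sticks to the structure
                rmax = max(rmax, abs(x), abs(y))
                break
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            if max(abs(x), abs(y)) > r0 + 10:
                x, y = launch(r0)     # wandered too far: relaunch
    return cluster
```

A stickiness parameter µ as discussed above could be added by letting the walker stick only with probability µ when it first touches the cluster.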


11 Conclusions, Discussion, Recommendations and personal notes

We summarize our main results in this section, discuss them, and give some recommendations for further research.

Stabilizability Firstly, we have analysed micro-systems in order to find some characterizations of stabilizability in the sink/source Abelian Sandpile model. We could not prove any characterization, but it is certain that sink/source systems show rather chaotic behaviour which depends heavily on the micro-configuration of sources and sinks. One conjecture about one-dimensional sink/source systems could be made: a system with absorbing boundary conditions is stabilizable if and only if the spectrum of ∆_{Dn,Fn} is positive. If zero is an eigenvalue of ∆_{Dn,Fn}, we conjecture that the system is metastabilizable. Finally, if a negative eigenvalue is present, we conjecture that the system is unstabilizable. We recommend testing this hypothesis on systems in more dimensions to obtain a generalised characterisation of stabilizability.

Mathematical analysis of the ASM Starting with the basic definitions and theorems, we have introduced stochastic processes, Markov processes, their semigroups and generators. The ASM has been introduced in this context, and it has been shown that the general sink/source semigroup of the Markov chain of ASM configurations converges to the classical semigroup in case of stabilizability. Using the invariant measure on recurrent configurations, Dhar's formula has been proven for the sink/source ASM. A variant of the Feynman-Kac formula for stochastic processes has been proven, which enabled us to obtain a characterization of avalanche sizes and non-criticality in the infinite-volume ASM on Z^d. Indeed, the avalanche propagation can be seen as a random walk on Z^d. Non-criticality in the ASM has been defined as the expected avalanche size being finite. A sink/source potential is thereby linked to the survival time of a killed random walk in a field of traps and birth sites. If the expected survival time of such a random walk is finite, the system is non-critical.

Due to poor planning, I haven’t had enough time to analyse the random walk extensively. It turns out to be rather difficult to derive bounds on criticality in the infinite-volume limit. However, we have been able to derive a few theorems.

• If no source sites and only finitely many conservative sites are present, the model is not critical.

• This is essentially a generalization of the first theorem: If a uniform estimate of distances to dissipative sites can be made, the model is not critical.

• A stabilizable system consisting of only dissipative sites with only finitely many source sites is not critical under the assumption that the local critical height is very large.

We recommend relating the ASM to the Parabolic Anderson model or the pinning model. These models are extensively studied in statistical physics and may provide insights in the behaviour of the ASM in the d-dimensional infinite volume limit. Also, we have assumed well-definedness of the infinite-volume limit, but in some dimensions this is still an open question. Further research is required.

Renormalisation approach to the BTW model The BTW model has been analysed using the method of renormalisation. Thereby we were able to derive some critical exponents governing the avalanche dynamics in two dimensions. We have focussed on one exponent since the avalanche area is the quantity we have also investigated in both the mathematical and numerical part of this thesis.


It is found that f_S(s) ∼ s^(−τ) with τ ≈ 1.253. We have also introduced dissipation into the renormalisation equations, and we have good reason to think that self-organized critical behaviour is lost when a uniform density of dissipation is applied. Unfortunately, we were not able to prove this due to the extremely complex form of the renormalisation transformation. We recommend a different approach to the renormalisation of the BTW model. Methods similar to ours have been proposed, and a simplified renormalisation transformation has also been tried by other scientists, but it failed to provide useful results. We therefore recommend investigating the possibilities of formulating a general framework of SOC to analyse the behaviour of completely attractive fixed points. It could be the case that SOC forms a particular subclass of critical phenomena that cannot be completely analysed using the present methods.

Numerical simulation The BTW model has also been simulated numerically using Fortran 95. The effects of sources and sinks were analysed, with the main result that source sites are very unpredictable and tend to blow up the system in an unpredictable way. We simulated the BTW model on differently sized lattices and performed a finite-size scaling ansatz to retrieve the infinite-volume critical exponent τ∞. We have also performed a maximum likelihood analysis to estimate the critical exponent τ for different system sizes. A pure discrete power-law approach yields unreliable results due to finite-size effects. Therefore, a discrete distribution has been proposed in which power-law behaviour occurs over a limited range. This approach also yielded a poor fit. The candidate that we think is best is given by the density

h_S(s) = (λ^(1−τ)/Γ(1 − τ, λ)) s^(−τ) e^(−λs)    (128)

which Virkar and Clauset refer to as a power law with exponential cutoff. Unfortunately, we have not had the time and knowledge to fit this distribution properly to the data. Nevertheless, we think it is an excellent candidate for fitting the data. The distribution of avalanche sizes in the uniformly dissipative case has not been retrieved: at first an exponential distribution was assumed, but this turned out not to be the case. More likelihood-ratio tests assuming various underlying distributions are necessary to retrieve the distribution of avalanche sizes in the uniformly dissipative case. The distribution of avalanche sizes in mixed sink/source systems has not been analysed due to the chaotic behaviour of sources and the highly local character of avalanche sizes: an area containing relatively many sources may generate many particles during a single avalanche, increasing the risk of unstabilizability. We recommend using a faster computer to simulate the BTW model.
The simulation could be made faster by implementing a more efficient way of saving the height configurations at every time iteration: instead of saving an L×L matrix at each iteration, one could save an L²×1 vector containing the heights, which is faster in Fortran. Also, a more efficient and all-round plotting package could be used to visualize the avalanche dynamics more vividly. Instead of overwriting the height configuration at every plotting iteration, one could plot only the height differences between iterations and leave the rest intact. The data analysis we have performed could also be improved. Simulating larger systems with a larger particle number both increases the range of power-law behaviour and improves the likelihood estimates of the critical exponents. We were not able to find theory on power laws with cut-off, so more research could be done in that field. Since tail probabilities are especially important in power-law distributions, this is a major difficulty in analysing BTW model data.

We have included a short discussion of the SOC paradigm introduced by Per Bak. Bak himself regarded SOC as a universal mechanism underlying various physical phenomena. Although we agree that the idea is well thought out, we do not agree with his conclusion. SOC can at most be seen as a class of simplified, theoretical models. The emergence of power-law behaviour on a limited scale does not necessarily mean that SOC occurs. Moreover, the main problem with the SOC paradigm is that it is poorly defined.


References

[1] Per Bak, Chao Tang and Kurt Wiesenfeld, Self-organized criticality, Physical Review A, July 1, 1988.

[2] Per Bak, How Nature Works: The Science of Self-Organized Criticality, New York, NY, USA: Copernicus, 1996, print.

[3] Deepak Dhar, The Abelian Sandpile and Related Models, arXiv:cond-mat/9808047 October 22, 1998.

[4] S.S. Manna, Large-Scale Simulation of Avalanche Cluster Distribution in Sand Pile Model, Journal of Statistical Physics, Vol. 59, Nos. 1/2, 1990

[5] Frank Redig, Mathematical aspects of the abelian sandpile model, Lecture Notes of Les Houches Summer School 2005, Mathematical Statistical Physics, Session LXXXIII, June 21, 2005.

[6] Antal A. Járai, Frank Redig and Ellen Saada, Approaching Criticality via the Zero Dissipation Limit in the Abelian Avalanche Model, J. Stat. Phys. (2015) 159:1369–1407.

[7] Anne Fey, Ronald Meester and Frank Redig, Stabilizability and Percolation in the infinite volume Sandpile Model, The Annals of Probability, Vol. 37, No. 2, 2009.

[8] Rogers, L.C.G. and Williams, D. (1997), Diffusions, Markov Processes, and Martingales, 2nd Ed. New York, NY, John Wiley and Sons, print.

[9] Bakry, D., Gentil, I., Ledoux, M. (2014), Analysis and Geometry of Markov Diffusion Operators New York, NY, Springer

[10] Anne Fey and Frank Redig, Organized versus self-organized criticality in the abelian sandpile model, http://www.eurandom.tue.nl/reports/2005/031-report.pdf September 6, 2005.

[11] F. Redig and E. Saada, Non-criticality of the abelian sandpile model on a random tree and related models, work in progress.

[12] F. den Hollander, J. Naudts, and F. Redig, Long-Time Tails in a Random Diffusion Model Journal of Statistical Physics, Vol. 69, Nos. 3/4, 1992

[13] G. Giacomin, (2011) Disorder and critical phenomena through basic probability models, Springer lecture notes in mathematics 2025, Springer

[14] Jos Thijssen, Lecture notes on Statistical Mechanics, Course AP3021G, 2014.

[15] Sornette, D. (2000), Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization and Disorder: Concepts and Tools Berlin, Springer Verlag Heidelberg Berlin, print.

[16] Alessandro Vespignani, Stefano Zapperi, and Luciano Pietronero, Renormalization approach to the self-organized critical behavior of sandpile models, Physical Review E, Vol. 51, No. 3, 1995.

[17] S.S. Manna, Two-state model of self-organized criticality J. Phys. A: Math. Gen. 24 (1991)

[18] ICCP/coding notes, https://github.com/ICCP/coding-notes, retrieved November 2015.

[19] Vladyslav A. Golyk, Self-organized criticality, Massachusetts Institute of Technology, Department of Physics, Cambridge, Massachusetts 02139, USA

[20] P. Grassberger, S.S. Manna, Some more sandpiles, Journal de Physique, 1990, 51 (11), pp. 1077–1098, doi:10.1051/jphys:0199000510110107700.


[21] S. Lübeck and K. D. Usadel, Numerical Determination of the Avalanche Exponents of the Bak-Tang-Wiesenfeld Model, Phys. Rev. E 55, 4095 (1997).

[22] Yogesh Virkar and Aaron Clauset, Power-law distributions in binned empirical data, The Annals of Applied Statistics, 2014, Vol. 8, No. 1, 89–119.

[23] Alvaro Corral, Anna Deluca, and Ramon Ferrer-i-Cancho, A practical recipe to fit discrete power-law distributions, arXiv:1209.1270v1, September 6, 2012.

[24] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical Recipes in FORTRAN Cambridge University Press, Cambridge, 2nd edition.

[25] Devroye, L., Non-Uniform Random Variate Generation, Springer-Verlag, New York, 1986


Appendices

A Project Description: Dissipation in the Abelian Sandpile Model

Author: Henk Jongbloed Supervisors: prof. dr. F.H.J. Redig (EWI) and dr. J.M. Thijssen (TN) Bachelor Project Applied Mathematics and Applied Physics, Delft University of Technology

The Abelian Sandpile Model is a stochastic model with interesting mathematical and physical properties, first described by Bak, Tang and Wiesenfeld [1]. They also introduced the concept of self-organized criticality in this model, which they defined as a critical state of the model, with no characteristic temporal or spatial scales. Following their article, the model has been studied intensively in both the physics literature, for example [3], and the mathematics literature, for example [5]. Since the paper by Bak, Tang and Wiesenfeld, the abelian sandpile model has been studied on finite and infinite lattices, considering effects of dissipation as well as various other modifications of the original model. We will study variations of the BTW model with sinks (dissipative vertices where mass disappears) and sources (vertices where mass is added). Analytically, we will focus on the question above which level of dissipation criticality is lost. The model with sources and sinks will be analysed first numerically; then, if possible, we will mathematically study the question whether equilibration of sources and sinks implies that the resulting model is still critical. It is known that without dissipation in the interior of the lattice, self-organized criticality occurs. Will criticality hold under various distributions of dissipation? Some bounds are known, but not exact ones. How does the critical behaviour scale with the system size; in particular, is it possible to infer exponents from finite-size scaling?


B Code

The Fortran program below was used to generate the avalanche size data.

!ASMeff.f90
program ASMeff ! 2D BTW model with fixed sources/sinks: mathematical
implicit none
integer, parameter :: L = 80, N = 10000000 !, hmax = 3
integer :: eta(L,L), distr(L,L), topnum(L,L)
!logical :: per = .false.
real(8) :: fn, fd, fs
eta = 0
topnum = 0
fn = 1.0_8 ! system parameters: density of normal/dissipative/source sites
fd = 0.0_8
fs = 0.0_8
call init_distr(distr, L, fn, fd, fs)

call init_fig(L)
call execute(eta, distr, topnum, L, N) !, per)

contains

subroutine execute(eta, distr, topnum, L, N) !, per)
integer :: L, N, k, p, q
!logical :: per
integer :: eta(L,L), distr(L,L), topnum(L,L), topnum_before(L,L), topsites(L,L) !, bound(L,L)
integer :: avalanche_size(N,3) !, totalmass(N)
real :: start, finish
!call init_bound(bound, L)
call cpu_time(start)
do k = 1, N
   topsites = 0
   call a_x(eta, L, p, q)
   !if (k > 3*L**2) then
   !   call plot_eta(eta, L)
   !   call sleep(1)
   !end if
   topnum_before = topnum
   call stabilize(eta, distr, topnum, L, p, q, k) !, per)
   avalanche_size(k,1) = sum(topnum - topnum_before) ! Total avalanche size
   where (topnum - topnum_before > 0) topsites = 1
   avalanche_size(k,2) = sum(topsites) ! Affected sites by avalanche
   avalanche_size(k,3) = sum(topnum(1:L,1) - topnum_before(1:L,1)) + sum(topnum(1:L,L) - topnum_before(1:L,L)) &
      + sum(topnum(1,1:L) - topnum_before(1,1:L)) + sum(topnum(L,1:L) - topnum_before(L,1:L)) ! Boundary sites
   !totalmass(k) = sum(eta)


   if (mod(k, N/50) == 0) then
      call plot_eta(eta, L)
      !print *, k
      !call sleep(1)
   end if
end do
!call sleep(1)
!call plot_eta(topnum, L)
call cpu_time(finish)
!print *, avalanche_size
print '("Lattice Dimension = ",i4.1," x",i4.1," units.")', L, L
print '("System Parameters: ",f9.3,",",f9.3,",",f9.3)', fn, fd, fs
print '("Number of Iterations = ",i13.1)', N
print '("Running Time = ",f9.3," seconds.")', finish - start
print '("Max Avalanche Size = ",i9.1," grains.")', maxval(avalanche_size(1:N,1))
print '("Max Number Affected Sites = ",i9.1," sites.")', maxval(avalanche_size(1:N,2))
print '("Max number grains fallen off table = ",i9.1," grains.")', maxval(avalanche_size(1:N,3))
!open(unit = 10, status='replace', file='80_36_64.txt')
!write(10,*) avalanche_size
!close(10)
!open(unit = 10, status='replace', file='tm320D.txt')
!write(10,*) totalmass
!close(10)
!open(unit = 10, status='replace', file='frac300.txt')
!write(10,*) eta
!close(10)
!open(unit = 10, status='replace', file='fractn300.txt')
!write(10,*) topnum
!close(10)
!call plot_eta(eta, L)
call endplot()
end subroutine execute

subroutine a_x(eta, L, p, q)
integer :: p, q, L, ipos(2)
real(8) :: pos(2)
integer :: eta(L,L)
call random_number(pos)
!pos(1:2) = (/0.5, 0.5/)
ipos = int(pos*L) + 1
eta(ipos(1),ipos(2)) = eta(ipos(1),ipos(2)) + 1
p = ipos(1)
q = ipos(2)
end subroutine a_x

subroutine toppleper(eta, distr, topnum, L, i, j) ! Boundaries: periodic
integer :: L, i, j, im1, ip1, jm1, jp1
integer :: eta(L,L), distr(L,L), topnum(L,L)
ip1 = modulo(i,L) + 1
im1 = modulo(i-2,L) + 1 ! Correct!


jp1 = modulo(j,L) + 1
jm1 = modulo(j-2,L) + 1
eta(im1,j) = eta(im1,j) + 1
eta(ip1,j) = eta(ip1,j) + 1
eta(i,jm1) = eta(i,jm1) + 1
eta(i,jp1) = eta(i,jp1) + 1
eta(i,j) = eta(i,j) - distr(i,j)
topnum(i,j) = topnum(i,j) + 1
end subroutine toppleper

subroutine topplebins(eta, distr, topnum, L, i, j) ! Boundaries: sinks
integer :: L, i, j
integer :: eta(L,L), distr(L,L), topnum(L,L)
if (i > 1) eta(i-1,j) = eta(i-1,j) + 1
if (i < L) eta(i+1,j) = eta(i+1,j) + 1
if (j > 1) eta(i,j-1) = eta(i,j-1) + 1
if (j < L) eta(i,j+1) = eta(i,j+1) + 1
eta(i,j) = eta(i,j) - distr(i,j)
topnum(i,j) = topnum(i,j) + 1
end subroutine topplebins

recursive subroutine stabilize(eta, distr, topnum, L, p, q, k) !, per) ! Boundaries: recursive subroutine
integer :: L, p, q, k
integer :: eta(L,L), distr(L,L), topnum(L,L)
!logical :: per
if (eta(p,q) >= distr(p,q)) then
   !if (k > 3*L**2) call plot_ava(eta, L)
   !call sleep(1)
   !if (per) then
   !   call toppleper(eta, distr, topnum, L, p, q)
   !   call stabilize(eta, distr, topnum, L, modulo(p,L)+1, q, per)
   !   call stabilize(eta, distr, topnum, L, modulo(p-2,L)+1, q, per)
   !   call stabilize(eta, distr, topnum, L, p, modulo(q,L)+1, per)
   !   call stabilize(eta, distr, topnum, L, p, modulo(q-2,L)+1, per)
   !else
   call topplebins(eta, distr, topnum, L, p, q)
   if (p < L) call stabilize(eta, distr, topnum, L, p+1, q, k) !, per)
   if (p > 1) call stabilize(eta, distr, topnum, L, p-1, q, k) !, per)
   if (q < L) call stabilize(eta, distr, topnum, L, p, q+1, k) !, per)
   if (q > 1) call stabilize(eta, distr, topnum, L, p, q-1, k) !, per)
end if
!end if
end subroutine stabilize

subroutine rand_eta(eta, L, hmax) ! Generate random height configuration with max height hmax
integer :: hmax, L
real(8) :: etau(L,L)
integer :: eta(L,L)


call random_number(etau)
eta = int(hmax*etau)
end subroutine rand_eta

subroutine init_distr(distr, L, p, q, r) ! p,q,r represent the relative frequencies of normal, sink and source sites respectively
integer :: L
real(8) :: p, q, r, b1, b2, distru(L,L)
integer :: distr(L,L)
b1 = p/(p+q+r) ! uniform distribution
b2 = (p+q)/(p+q+r)
call random_number(distru)
where (distru < b1) distr = 4 ! normal
where (distru >= b1 .AND. distru < b2) distr = 5 ! sink
where (distru >= b2) distr = 3 ! source
end subroutine init_distr

subroutine init_fig(L)
integer :: L
call initplot('lightblue', 1800, 1800, 'plot.ps', 1)
call framing(-dble(L), -dble(L), dble(L), dble(L))
call putstartbutton()
call putstopbutton()
call initbirdseye(1.0_8, 0.5_8, 1.0_8)
end subroutine init_fig

subroutine init_bound(bound, L)
integer :: L
integer :: bound(L,L)
bound = 0
bound(1:L,1) = 1
bound(1:L,L) = 1
bound(1,1:L) = 1
bound(L,1:L) = 1
end subroutine init_bound

subroutine plot_eta(eta, L)
integer :: L
integer :: eta(L,L)
call setnamedbackground('lightblue')
call d3drawsurf(dble(eta), L, L)
end subroutine plot_eta

subroutine plot_ava(eta, L)
integer :: L
integer :: eta(L,L)
call setnamedbackground('white')
call d3drawsurf(dble(eta), L, L)
end subroutine plot_ava
end program ASMeff
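For reference, the three per-avalanche observables that ASMeff.f90 records (total number of topplings, number of distinct toppled sites, and boundary topplings) can be computed from the cumulative toppling counters as in this NumPy sketch (a hypothetical helper for illustration, not part of the thesis code):

```python
import numpy as np

def avalanche_observables(topnum_before, topnum_after):
    """Derive the three recorded observables of one avalanche from the
    cumulative per-site toppling counters taken before and after a
    single grain addition."""
    d = topnum_after - topnum_before   # topplings in this avalanche
    size = int(d.sum())                # total avalanche size
    sites = int((d > 0).sum())         # distinct sites that toppled
    # boundary topplings, summing all four edges (corner sites are
    # counted twice, exactly as in the Fortran expression)
    boundary = int(d[0, :].sum() + d[-1, :].sum() + d[:, 0].sum() + d[:, -1].sum())
    return size, sites, boundary
```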


The Matlab script shown below analyses the avalanche size data generated with the BTW method by ASMeff.f90.

%% Script to read data ASM
clc; clear all;
%time = 1:1e7;
%% Data import
[fileID,msg] = fopen('/media/henk/Windows/Users/henkj_000/MATLAB/bin/80_9_9_1.txt','r');
formatSpec = '%f';
data = fscanf(fileID, formatSpec);
%% Constants dependent on data
L = 80;               % Lattice dimensions
N = 10000000;         % Number of particles added
crit = round(5*L^2);  % By observation
%% Order data
avsizes  = data(1:N);
avsites  = data(N+1:2*N);
boundary = data(2*N+1:3*N);
azc = avsizes(crit:end);  % Consider critical state
atc = avsites(crit:end);
abc = boundary(crit:end);
azc = azc(azc > 0);       % Only consider strictly positive avalanche sizes
atc = atc(atc > 0);
abc = abc(abc > 0);
%% Initialize plots
xz = unique(azc); xt = unique(atc); xb = unique(abc);  % List unique array elements
Nz = numel(xz); Nt = numel(xt); Nb = numel(xb);
quantz = zeros(Nz,1); quantt = zeros(Nt,1); quantb = zeros(Nb,1);
for k = 1:Nz
    quantz(k) = sum(azc == xz(k));  % Create histograms
    if (mod(k, floor(Nz/20)) == 0)  % Monitor progress
        disp(k)
    end
end
for k = 1:Nt
    quantt(k) = sum(atc == xt(k));
    if (mod(k, floor(Nt/20)) == 0)
        disp(k)
    end
end
for k = 1:Nb
    quantb(k) = sum(abc == xb(k));
end

%% Plots
it = [1:1:N]';
part = [1:3e5]';
figure(1);
title('Avalanche size evolution in time')
subplot(3,1,1)


plot(it(part), avsizes(part), 'k')
xlabel('i')
ylabel('t')
subplot(3,1,2)
plot(it(part), avsites(part), 'b')
xlabel('i')
ylabel('s')
subplot(3,1,3)
plot(it(part), boundary(part), 'r')
xlabel('i')
ylabel('b')
hold off

figure(2)
loglog(xz, quantz./sum(quantz));
title('L = 80, N = 10.000.000, dissipative')
ylabel('f_T(t)')
xlabel('Total toppling numbers T')
grid on

figure(3)
%dim = [.5 .6 .3 .3];
%str = 'f_S(s) ~ s^{-\tau}, \tau \approx 0.9912';
%annotation('textbox', dim, 'String', str, 'FitBoxToText', 'on');
loglog(xt, quantt./sum(quantt));
title('L = 80, N = 10.000.000, dissipative')
ylabel('f_S(s)')
xlabel('Avalanche size s')
grid on

figure(4)
loglog(xb, quantb./sum(quantb));
title('L = 80, N = 10.000.000, dissipative')
ylabel('f_B(b)')
xlabel('Number of falling grains from boundary b')
grid on

disp([mean(azc), mean(atc), mean(abc)])
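The histogram loops in the Matlab script above can be collapsed into a single call in NumPy; a hypothetical sketch of the same bookkeeping (discard the build-up phase, keep strictly positive sizes, count unique values):

```python
import numpy as np

def avalanche_distribution(sizes, crit):
    """Return the unique avalanche sizes and their normalized
    frequencies, after discarding the first `crit` (build-up) samples
    and all non-positive entries."""
    s = np.asarray(sizes)[crit:]
    s = s[s > 0]
    values, counts = np.unique(s, return_counts=True)
    return values, counts / counts.sum()
```

Plotting `values` against the frequencies on a log-log scale then reproduces the figures above.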

The Matlab script shown below was used to compute a maximum likelihood estimate of the critical exponent τ and to perform bootstrapping and Kolmogorov–Smirnov testing.

%maxlikelihood.m
clc; clear all;
%time = 1:1e7;
%% Data import
[fileID,msg] = fopen('/media/henk/Windows/Users/henkj_000/MATLAB/bin/80N.txt','r');
formatSpec = '%f';
data = fscanf(fileID, formatSpec);
%% Constants dependent on data


L = 80;                 % Lattice dimensions
N = 10000000;           % Number of particles added
crit = round(2.2*L^2);  % By observation
%% Order data
avsites = data(N+1:2*N);  % extract avalanche sizes from data
atc = avsites(crit:end);  % discard build-up phase
atc = atc(atc > 0);       % keep strictly positive avalanche sizes
%% Maximum likelihood estimate of tau
M = numel(atc);
lnG = sum(log(atc))/M;
logl = @(tau) log(zeta(tau)) + tau*lnG;  % negative log-likelihood per sample
tau_est = fminbnd(logl, 0.8, 2);
%% Empirical and fitted survival functions
c = zeta(tau_est);
smax = max(atc);
Semp = zeros(smax,1); Nj = zeros(smax,1);
tic
for j = 1:smax
    Semp(j) = (c - sum([1:j-1].^(-tau_est)))./c;  % fitted survival function
    Nj(j) = sum(atc >= j);                        % empirical survival function
    disp(j)
end
demp = max(abs(Nj./M - Semp));  % KS statistic
toc
%% Generate data
Q = 100*L^2; P = 100;  % Bootstrap size
Nsim = zeros(Q,P);
U = rand(Q,P);         % Uniform numbers
X = floor(0.5.*(1-U).^(-1/(tau_est-1)) + 0.5);  % inverse transformation U to power-law
tic
for r = 1:Q
    for s = 1:P
        if X(r,s) > max(atc)
            while X(r,s) > max(atc)
                Us = rand;
                X(r,s) = floor(0.5.*(1-Us).^(-1/(tau_est-1)) + 0.5);  % redraw values beyond the data range
            end
        end
    end
end
toc
lnGboot = zeros(P); tauboot = zeros(P);  % preallocate
dsim = zeros(P,1);

%% Calculate tau for data
for i = 1:P  % bootstrapping
    lnGboot(i) = sum(log(X(:,i)),1)./Q;
    loglb = @(tau) log(zeta(tau)) + tau*lnGboot(i);
    tauboot(i) = fminbnd(loglb, 0.8, 2);
    Ssim = zeros(max(X(:,i)),1);
    c = zeta(tauboot(i));
    Nk = zeros(max(X(:,i)),1);
    for k = 1:max(X(:,i))


        Ssim(k) = (c - sum([1:k-1].^(-tauboot(i))))./c;  % simulated survival function
        Nk(k) = sum(X(:,i) >= k);
        %disp(k)
    end
    dsim(i) = max(abs(Nk./Q - Ssim));  % KS statistic
    disp(i)
end

p = sum(dsim >= demp)/P;  % p-value

%% Plots
smax = max(atc);
plot(1:smax, Nj./M, 'k-', 1:smax, Semp, 'k--', 1:smax, abs(Nj./M - Semp), 'k-.')
title('Empirical/Fitted survival function')
xlabel('s')
ylabel('Survival')
legend('M_s/M', 'S_{est}(s;\tau)', 'D_{emp}')
axis([1, smax, 0, 1])
grid on
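The statistical procedure above — a maximum-likelihood fit of the discrete power law f_S(s) = s^(-τ)/ζ(τ), followed by a Kolmogorov–Smirnov comparison of the empirical and fitted survival functions — can be condensed into a Python sketch (SciPy is assumed; this is an illustrative reimplementation, not a port of the Matlab script):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import zeta

def fit_zeta(samples):
    """ML fit of P(S = s) = s^(-tau) / zeta(tau), s = 1, 2, ..., and the
    KS distance between empirical and fitted survival functions."""
    s = np.asarray(samples, dtype=float)
    lng = np.log(s).mean()
    # negative log-likelihood per sample: log zeta(tau) + tau * mean(log s)
    nll = lambda tau: np.log(zeta(tau)) + tau * lng
    tau = minimize_scalar(nll, bounds=(1.05, 4.0), method='bounded').x
    k = np.arange(1, int(s.max()) + 1)
    pmf = k ** (-tau) / zeta(tau)
    surv_fit = 1.0 - np.cumsum(pmf) + pmf  # fitted P(S >= k)
    surv_emp = 1.0 - np.searchsorted(np.sort(s), k) / s.size  # empirical P(S >= k)
    d_ks = float(np.abs(surv_emp - surv_fit).max())
    return float(tau), d_ks
```

Applied to samples drawn from a zeta (Zipf) distribution, the estimator recovers the exponent and the KS distance stays small.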

The Fortran program below generates a simple diffusion-limited aggregation (DLA) cluster, which exhibits self-similarity.

program DLA2

implicit none

real(8), parameter :: R = 200d0
!real(8), parameter :: sc = 0.1_8 ! sticking coefficient
real(8), parameter :: pi = 4*atan(1.0_8)
integer, parameter :: N = 20000000
integer, dimension(-int(R):int(R), -int(R):int(R)) :: map = 0
integer :: x, y, k, xs, ys
real(4) :: dir, angle, start, finish, spp
logical :: alive, kicking, full
call init_fig(nint(R))
call setnamedbackground('lightblue')
call drawcircle(R-2)
call setpoint(0d0, 0d0)
map(0,0) = 1
full = .false.
call cpu_time(start)

do k = 1, N
   call random_number(angle)
   x = nint((R+0.5_8)*cos(2*pi*angle))  ! Depart from random location on the circle
   y = nint((R+0.5_8)*sin(2*pi*angle))
   kicking = .true.
   alive = .true.

   do while (alive .and. kicking)
      call random_number(dir)


      if (dir < 0.25_8) then  ! Simple random walk
         x = x+1
      else if (dir < 0.5_8) then
         y = y+1
      else if (dir < 0.75_8) then
         x = x-1
      else
         y = y-1
      end if

      if (x**2 + y**2 >= R**2) then
         alive = .false.  ! RW outside circle: kill particle
         EXIT
      else if (map(x+1,y) + map(x,y+1) + map(x-1,y) + map(x,y-1) >= 1) then

         if (map(x+1,y) .eq. 1) then
            kicking = .false.  ! Particle aggregated
            xs = x+1
            ys = y
         else if (map(x,y+1) .eq. 1) then
            kicking = .false.
            xs = x
            ys = y+1
         else if (map(x-1,y) .eq. 1) then
            kicking = .false.
            xs = x-1
            ys = y
         else
            kicking = .false.
            xs = x
            ys = y-1
         end if
      end if
   end do
   if (.not. kicking) then  ! Aggregate
      map(x,y) = 1
      call setpoint(dble(xs), dble(ys))
      call drawto(dble(x), dble(y))
      if (x**2 + y**2 >= 0.95*R**2) full = .true.
   end if
   if (full) EXIT
end do
call cpu_time(finish)
print *, k
print *, sum(map)

print *, finish - start
call endplot()

contains

subroutine init_fig(L)
integer :: L
call initplot('lightblue', 1800, 1800, 'plot.ps', 1)
call framing(-dble(L), -dble(L), dble(L), dble(L))
call putstartbutton()
call putstopbutton()
call initbirdseye(1.0_8, 1.0_8, 1.0_8)
end subroutine init_fig

subroutine drawcircle(R)
real(8) :: R, p, q
integer :: num, l
num = nint(8d0*R)
call setpoint(dble(R), dble(0))
do l = 0, num
   p = R*cos(2d0*pi*dble(l)/dble(num))
   q = R*sin(2d0*pi*dble(l)/dble(num))
   call drawto(dble(p), dble(q))
end do
end subroutine drawcircle
end program DLA2
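The aggregation loop of DLA2 can be condensed into a short Python sketch (plotting omitted; radius, particle count and the kill radius of 2R are reduced or simplified for illustration, whereas the Fortran code kills walkers at radius R):

```python
import math
import random

def grow_dla(R=20, max_particles=200, seed=1):
    """Grow a DLA cluster on the square lattice: random walkers released
    on a circle of radius R stick as soon as they step next to an
    occupied site; walkers that wander beyond radius 2R are discarded."""
    random.seed(seed)
    occupied = {(0, 0)}  # seed particle at the origin
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(max_particles):
        a = random.uniform(0, 2 * math.pi)  # release point on the circle
        x, y = round(R * math.cos(a)), round(R * math.sin(a))
        while True:
            dx, dy = random.choice(steps)  # simple random walk
            x, y = x + dx, y + dy
            if x * x + y * y >= (2 * R) ** 2:  # walked too far: kill walker
                break
            if any((x + sx, y + sy) in occupied for sx, sy in steps):
                occupied.add((x, y))  # stick next to the cluster
                break
    return occupied

cluster = grow_dla(R=15, max_particles=100, seed=3)
```

Most walkers either escape or attach within a few thousand steps, and the resulting set of occupied sites already shows the branched, self-similar structure discussed in the text.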
