UPPSALA DISSERTATIONS IN MATHEMATICS 117
Selected Topics in Continuum Percolation: Phase Transitions, Cover Times and Random Fractals
Filipe Mussini
Department of Mathematics, Uppsala University, UPPSALA 2019
Dissertation presented at Uppsala University to be publicly examined in Häggsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, Thursday, 24 October 2019 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Hermine Biermé (Université de Poitiers, Laboratoire de Mathématiques et Applications).
Abstract
Mussini, F. 2019. Selected Topics in Continuum Percolation. Phase Transitions, Cover Times and Random Fractals. Uppsala Dissertations in Mathematics 117. 54 pp. Uppsala: Department of Mathematics. ISBN 978-91-506-2787-9.
This thesis consists of an introduction and three research papers. The subject is probability theory and in particular concerns the topics of percolation, cover times and random fractals. Paper I deals with the Poisson Boolean model in locally compact Polish metric spaces. We prove that if a metric space M1 is mm-quasi-isometric to another metric space M2 and the Poisson Boolean model in M1 features one of the following percolation properties: it has a subcritical phase or it has a supercritical phase, then respectively so does the Poisson Boolean model in M2. In particular, if the process in M1 undergoes a phase transition, then so does the process in M2. We use these results to study phase transitions in a large family of metric spaces, including Riemannian manifolds, Gromov spaces and Cayley graphs. In Paper II we study the distribution of the time it takes for a Poisson process of cylinders to cover a bounded subset of d-dimensional Euclidean space. The Poisson process of cylinders is invariant under rotations, reflections and translations. Furthermore, we add a time component, so that one can imagine that the cylinders are “raining from the sky” at unit rate. We show that the cover times of a sequence of discrete and well separated sets converge to a Gumbel distribution as the cardinality of the sets grows. For sequences of sets with positive box dimension, we determine the correct speed at which the cover times of the sets A_n grow. In Paper III we consider a semi-scale invariant version of the Poisson cylinder model. This model induces a random fractal set in the vacant region of the process. We establish an existence phase transition for dimensions d ≥ 2 and a connectivity phase transition for dimensions d ≥ 4. An important step when analysing the connectivity phase transition is to consider the restriction of the process onto subspaces. We show that this restriction induces a fractal ellipsoid model in the corresponding subspace.
We then present a detailed description of this induced ellipsoid model. Moreover, the almost sure Hausdorff dimension of the fractal set is also determined.
Keywords: Poisson point process, Percolation, Boolean model, Quasi-isometries, Cover times, Poisson cylinder process, Ellipsoid process, Phase transition, Random fractals
Filipe Mussini, Department of Mathematics, Analysis and Probability Theory, Box 480, Uppsala University, SE-75106 Uppsala, Sweden.
© Filipe Mussini 2019
ISSN 1401-2049
ISBN 978-91-506-2787-9
urn:nbn:se:uu:diva-392552 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392552)

If you’re having math problems I feel bad for you, son
I got ninety nine problems but a thesis ain’t one.
Hit me!
List of papers
This thesis is based on the following papers, which are referred to in the text by their Roman numerals.
I Cristian F. Coletti, Daniel Miranda and Filipe Mussini. Invariance Under Quasi-isometries of Subcritical and Supercritical Behavior in the Boolean Model of Percolation. Journal of Statistical Physics, 162(3), 685–700, 2016.
II Erik I. Broman and Filipe Mussini. Random cover times using the Poisson cylinder process. Accepted for publication in the Latin American Journal of Probability and Mathematical Statistics, 2019.
III Erik I. Broman, Olof Elias, Filipe Mussini and Johan Tykesson. The fractal cylinder process: existence and connectivity phase transition. Manuscript submitted for publication, July 2019.
Reprints were made with permission from the publishers.
Contents
1 Introduction
  1.1 Fractal geometry
  1.2 Percolation
    1.2.1 Discrete percolation
    1.2.2 Continuum percolation
    1.2.3 Fractal percolation
  1.3 Coverage processes
    1.3.1 The vacant region
    1.3.2 Cover times
2 Formal introduction
  2.1 Fractal geometry
  2.2 Percolation
    2.2.1 Discrete percolation
    2.2.2 Continuum percolation
  2.3 Coverage processes
    2.3.1 The vacant region
    2.3.2 Cover times
3 Summary of papers
  3.1 Paper I
  3.2 Paper II
  3.3 Paper III
4 Summary in Swedish - Sammanfattning på svenska
5 Acknowledgements
References
Foreword
This thesis consists of four introductory chapters followed by three papers. In Chapter 1, the reader will find an informal explanation of the topics studied in the thesis. The focus there is to motivate the subject while omitting most mathematical details. In Chapter 2, the mathematically curious reader can find more detailed explanations of the topics discussed in Chapter 1. Chapter 3 consists of a summary of the papers and Chapter 4 is a summary of the thesis in Swedish. Finally, the reader will find the papers themselves. These papers constitute the majority of the thesis.
1. Introduction
This thesis concerns three topics in probability theory: random fractal geometry, percolation and cover times. In this chapter we will give an informal introduction to each of them. The reader will find examples and casual descriptions of some of the most interesting classical results illustrating these topics. The mathematical formalism of the models will be discussed in the next chapter.
1.1 Fractal geometry
The study of fractals traces back to the 17th century, when scientists (Newton and Leibniz, among others) were developing the theory of differential calculus. Much of this theory was developed for continuous and smooth functions. However, some examples of what we today know as fractals were devised as pathological counter-examples to calculus theorems. In the beginning of the 19th century, the first example of a nowhere differentiable continuous curve was constructed. Later, versions of what today is called the Cantor set were developed. It was only by the end of the 19th and beginning of the 20th century that the systematic study of such objects was conducted by Hausdorff, von Koch and Sierpiński, to name a few.
The term fractal was first used to describe such objects by Mandelbrot in the 70s. In 1977, Mandelbrot [22] published a book that put fractals into the spotlight as geometric objects present in many natural phenomena. Mandelbrot’s work inspired many mathematicians to later develop what we know today as fractal geometry. Moreover, due to the connection that fractal geometry has with nature, it started being used in more applied sciences, such as physics and biology (see [25] and [8], for respective examples). Fractals quickly became famous to the general public due to the easily generated beautiful pictures.
It will be illustrative to consider a simple concrete example. The example we choose to explore is the famous Cantor set, and it is constructed as follows: start with the interval [0, 1]. Then, delete the middle third of the interval to obtain two disjoint intervals, [0, 1/3] and [2/3, 1]. For each of these smaller intervals, delete the middle third again, resulting in four intervals, [0, 1/9], [2/9, 1/3], [2/3, 7/9] and [8/9, 1]. Continue this deletion process ad infinitum and denote by C the points that are never deleted. See Figure 1.1 for a sketch
Figure 1.1. The first five iterations in the construction of the Cantor set.
of the described deletion procedure. The set C is called the Cantor set and has many remarkable properties. It is of course natural to ask whether C is empty, i.e. whether we delete every point of [0, 1] at some stage in this process. However, by the deletion process described above, it is easy to see that the endpoints of each interval are never removed. For example, the points {0, 1/9, 2/9, 1/3} are in C, and we can conclude that C is non-empty. More surprisingly, it is not so difficult to show that the cardinality of C is the same as the cardinality of [0, 1], i.e., there is a pairing of the points in C with the ones in the whole interval [0, 1]. From the construction algorithm we can also see that given any two distinct points in C we can find a third point between those two that is not in C. This implies that the Cantor set does not contain any intervals larger than single points, or in other words, that the Cantor set is totally disconnected.
Another interesting phenomenon takes place when we scale the Cantor set by a factor of 1/3. After comparing the shrunken version with the original set, one notices that the shrunken version is exactly the same as the “left” component of the original one. Moreover, the “right” component of the original set is also equal to a translate of the shrunken version (see Figure 1.1). In other words, the Cantor set is equal to two shrunken copies of itself. This property is called self-similarity and it is typical for fractals. It is important to mention that there are weaker versions of self-similarity. For instance, we can adapt the construction of the Cantor set by, at each step, removing a random subinterval instead of the middle third. This will again result in a totally disconnected set where the shrunken versions are not exact copies of the original set, but qualitatively similar.
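The middle-thirds deletion is simple enough to mimic in a few lines of code. The following sketch is purely illustrative (it is not part of the thesis, and the function name is my own); it tracks the surviving intervals as exact fractions:

```python
from fractions import Fraction

def cantor_intervals(steps):
    """Intervals remaining after `steps` rounds of middle-third deletion."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(steps):
        next_intervals = []
        for a, b in intervals:
            third = (b - a) / 3
            next_intervals.append((a, a + third))   # keep the left third
            next_intervals.append((b - third, b))   # keep the right third
        intervals = next_intervals
    return intervals

# After two steps: [0,1/9], [2/9,1/3], [2/3,7/9], [8/9,1], as in the text.
print(cantor_intervals(2))
```

After n steps one is left with 2^n intervals, each of length 3^(-n), so the total length 2^n/3^n tends to zero even though C itself is uncountable.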
In this case, we say that the sets are statistically self-similar, since both the shrunken version and the smaller components have the same statistical properties.
Another common feature of fractals is their non-integer dimension. There are different notions of dimension for fractals, and in this thesis we use two: the Hausdorff dimension and the box-counting dimension. Hausdorff dimension is a complicated concept and we therefore defer its description to Chapter 2. In order to illustrate the concept of box-counting dimension, we will demonstrate how it is used for a circle of radius 1. Start by plotting the circle into a grid where each box has side length δ (see Figure 1.2). Next,
Figure 1.2. The number Nδ for different values of δ. (a): δ = 0.5 and Nδ = 12; (b): δ = 0.25 and Nδ = 28; (c): δ = 0.125 and Nδ = 60; (d): δ = 0.05 and Nδ = 164.
count the number of boxes that the curve touches and denote this number by Nδ. Take a finer grid, that is, a grid where the side length of the boxes is smaller than δ (say δ/2) and repeat the process. In each iteration i, you should obtain two numbers: the side length of the boxes and the number of boxes touched by the curve, here denoted by δi and Nδi, respectively. An example of the first iterations can be found in Figure 1.2. As δ approaches zero, the relationship between Nδ and δ becomes clear: Nδ ≈ cδ^(-1) for some constant c. We then say that the circle has box-counting dimension 1, which is the negative of the exponent in the equation. In general, we say that a set has box-counting dimension s when the equation Nδ ≈ cδ^(-s) holds in the limit as δ goes to zero.
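The box-counting recipe above translates directly into a short numerical experiment. The sketch below is only an illustration (the grid sizes, the sampling density and the function name are arbitrary choices of mine): it counts the grid boxes touched by the unit circle for two values of δ and reads off the exponent from the slope.

```python
import math

def boxes_touched(delta):
    """Count grid boxes of side `delta` touched by the unit circle."""
    boxes = set()
    n_samples = 100_000  # dense enough that no box between samples is missed
    for i in range(n_samples):
        t = 2 * math.pi * i / n_samples
        x, y = math.cos(t), math.sin(t)
        boxes.add((math.floor(x / delta), math.floor(y / delta)))
    return len(boxes)

# The slope of log(N_delta) against log(1/delta) estimates the dimension.
d1, d2 = 0.1, 0.01
n1, n2 = boxes_touched(d1), boxes_touched(d2)
dim = (math.log(n2) - math.log(n1)) / (math.log(1 / d2) - math.log(1 / d1))
print(dim)  # close to 1, the box-counting dimension of a circle
```

Running the same experiment on the Cantor set instead would give a slope close to log 2/log 3 ≈ 0.63, a genuinely non-integer dimension.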
Figure 1.3. Some more examples of fractals. (a) The von Koch snowflake: Start with an equilateral triangle. In each iteration, divide each side into three segments of equal length. Draw another equilateral triangle that has the middle segment as base, pointing outward, and delete the base segment of the new triangle; (b) The Sierpiński triangle: Start with a filled equilateral triangle. In each iteration, divide each triangle into four smaller congruent equilateral triangles and remove the area of the triangle pointing downward.
1.2 Percolation
Percolation has its origins in the 1950s, when Broadbent and Hammersley [4] introduced it as a model for the study of flow of liquids in random environments. In order to illustrate this, imagine a cube of porous material and that some liquid is poured on top of it. The authors were interested in whether the liquid would eventually leak through to the bottom, and in particular how this depends on the proportion of open pores. This simple thought experiment illustrates the basic idea behind what later developed into a whole new mathematical theory, with applications not only in material science, but also in areas such as epidemiology and computer networks, and, more recently, the internet of things.
In order to illustrate how percolation can be applied to real life situations, we will briefly discuss a recent study related to traffic. Nowadays, modern cars can be connected to the internet. This technology gives the car instantaneous traffic information and can reduce the number of accidents, for example, but it also leaves the vehicle susceptible to cyber attacks. Vivek, Yanni, Yunker and Silverberg [34] studied one of the many effects that cyber attacks could have on traffic. In their model, they assumed that hackers would randomly select cars and cause them to halt simultaneously, possibly blocking an entire area. Using simulations, they calculated that if approximately 15% of the cars in the Manhattan area in New York were hacked, the traffic network would be split into several disconnected clusters. In other words, after the hack, a driver would become trapped in his or her current cluster, unable to visit other
parts of the city. If approximately 20% of the cars were hacked, then the traffic would stop completely.
There is a natural connection between the example above and the problem originally studied by Broadbent and Hammersley in [4]. The street network can be viewed as a porous material and the traffic flow as a liquid imbuing the material. A street where the cars can drive freely represents an open pore where the liquid can flow in the material. Blocked streets, on the other hand, represent the closed pores, where the liquid cannot pass.
As a mathematical field, percolation theory brought new problems whose solutions required elegant and interesting techniques, still being improved today (see for instance [10] for a remarkable recent improvement of a classical result). Moreover, it inspired the development of many different versions of the problem with a similar taste, such as continuum and fractal percolation, leading to the development of even more refined techniques. Percolation was also the central subject of research resulting in two Fields medals. In 2006 the Fields medal was awarded to Werner for his work with Lawler and Schramm on Schramm-Loewner evolution, and in 2010 it was awarded to Smirnov who, among other things, proved the so-called Cardy’s formula, a conjecture in theoretical physics posed in 1992. We will discuss their work briefly, but first we will explain a few percolation models in some more detail.
1.2.1 Discrete percolation
We will begin with the model introduced by Broadbent and Hammersley and refer to it as discrete percolation. In this subsection, we will illustrate the model informally and refer to Chapter 2 (and the references therein) for a more formal description. We would like to stress that, even though the examples discussed here are two-dimensional, the models can be extended to higher dimensions without problems. The same cannot be said about the results, where sometimes completely new proofs and ideas are required.
The model can be considered in a finite or an infinite setting, and we will start by considering the finite case. Take a square lattice, as the one illustrated in Figure 1.4(a). Now, pick your favourite coin and flip it once for each edge. Remove the edge if the coin lands heads or keep the edge if the coin lands tails; see Figure 1.4(b) for a possible realization. Proceed by painting each vertex on the top face of the lattice blue, then paint blue every vertex which still has an edge connecting it to a blue vertex, and repeat until there are no unpainted vertices connected to blue vertices (see Figure 1.4(c)).
In the liquid flow analogy, the lattice represents a porous material such as a rock. Each vertex is a pore and each edge represents a connection where the liquid can (possibly) flow. Removing an edge means that we consider it
Figure 1.4. (a): A portion of the square lattice; (b) The same portion of the square lattice after randomly removing some edges; (c) The ‘flooded’ vertices in picture (b) are represented in blue. Note that here we have a top-to-bottom open crossing; (d) Try it yourself! Pick your favourite coin and repeat the process, but drawing edges instead. Check if your copy of the thesis contains a crossing.
“closed”, whereas keeping an edge means that we consider it “open”. That is, the liquid cannot flow through closed connections, but can flow freely through open connections. The blue vertices are therefore pores filled with liquid. If there is a blue vertex in the bottom face of the lattice, this means that the liquid can flow through the material. We call this an open crossing, and such an example occurs in Figure 1.4(c).
Now consider the infinite lattice, i.e., a lattice similar to the one in Figure 1.4(a) but that extends infinitely in all directions. Repeat the edge deleting algorithm described in the finite case. The picture would be something similar to Figure 1.4(b), but infinite. We will refer to the open connected components of the resulting lattice as clusters. The liquid flow analogy also works for the infinite case. Pick your favourite vertex and paint it blue. Then proceed by painting blue all the vertices connected to it by an open edge, and repeat until all vertices in the open cluster of the blue vertex are coloured. The first blue vertex represents a source of liquid and the other blue vertices represent pores filled with the liquid. We are now interested in whether the liquid can flow forever, or in other words, whether the open cluster of the source is unbounded. If there exists such a vertex anywhere in the infinite lattice, we say that percolation occurs, or alternatively, that the model percolates.
The model described above is a version of bond percolation. Many aspects of this model have been studied in depth by a large number of mathematicians. Next, we will take a closer look at some aspects of this bond percolation model. The first one is the effect your coin has on the macroscopic behaviour of the system. By the description above, if you have a coin with two tail faces, then every edge would remain (and therefore the water would flow and the system percolates).
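The coin-flipping procedure just described is straightforward to put on a computer. The sketch below is a toy illustration only (the lattice size, trial counts and function names are my own choices, not taken from the papers): it flips a biased coin for every edge of an n × n grid and flood-fills from the top row, exactly as in the painting description.

```python
import random

def has_crossing(n, p, rng):
    """Top-to-bottom open crossing in bond percolation on an n x n grid."""
    # horiz[r][c]: edge between (r, c) and (r, c+1); vert[r][c]: (r, c)-(r+1, c)
    horiz = [[rng.random() < p for _ in range(n - 1)] for _ in range(n)]
    vert = [[rng.random() < p for _ in range(n)] for _ in range(n - 1)]
    # flood-fill ("paint blue") starting from every vertex in the top row
    stack = [(0, c) for c in range(n)]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == n - 1:
            return True  # a bottom-row vertex got painted blue
        nbrs = []
        if c + 1 < n and horiz[r][c]:
            nbrs.append((r, c + 1))
        if c > 0 and horiz[r][c - 1]:
            nbrs.append((r, c - 1))
        if r + 1 < n and vert[r][c]:
            nbrs.append((r + 1, c))
        if r > 0 and vert[r - 1][c]:
            nbrs.append((r - 1, c))
        for v in nbrs:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

rng = random.Random(0)
for p in (0.3, 0.5, 0.7):
    freq = sum(has_crossing(20, p, rng) for _ in range(200)) / 200
    print(p, freq)
```

Even on such a small grid the output already hints at the phase transition discussed next: the crossing frequency jumps from near 0 to near 1 as p passes 1/2.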
On the other hand, if you have a coin with two head faces, every edge would be deleted and the system would not percolate. Consider now a biased coin, namely a coin that lands on tails with probability p and on heads with probability 1 − p, where p is a number strictly larger than zero and strictly smaller than 1. It is intuitively clear that the probability that the system percolates increases as p increases, and it is natural to ask “what is the smallest p such that percolation occurs with positive probability?”. We will denote the smallest such p by pc. In the case of discrete bond percolation, one can show that such a pc exists and is non-trivial, that is, pc is neither 0 nor 1. This phenomenon is called a phase transition: for values of p smaller than pc, there is no percolation, whereas for values of p larger than pc, percolation occurs with positive probability. In fact, there is a so-called 0-1 law for the percolation event, meaning that if the probability of percolation occurring is positive, then the probability must be equal to 1.
For p > pc there is, therefore, at least one unbounded open cluster. It is then natural to ask whether there may be more than one. Indeed, a priori, we
Figure 1.5. The triangle lattice
could even have an infinite number of unbounded open clusters. This question was resolved incrementally in a series of papers culminating in a theorem by Burton and Keane [6] showing that when an unbounded open cluster exists, it is unique. We can collect the results just described in the following theorem: Let p denote the probability of keeping an edge in the bond percolation model described above. Then there exists a non-trivial critical probability pc such that i) for all p < pc there is no unbounded connected open cluster; and ii) for all p > pc there exists a unique unbounded connected open cluster.
It is important to point out that even though we defined and discussed the model in the case of a two-dimensional square lattice, the above theorem still holds for higher dimensional cubic lattices. Next, we will discuss some questions whose answers depend on the dimension of the lattice. We start with the numerical value of pc. For the square lattice, it is known that pc = 1/2, which was proved by Kesten in [17]. For higher dimensional lattices, including the important case d = 3, only estimates for pc are known (see [35]). Other than in very special cases, it is believed to be impossible to find exact values for pc. Examples of such special cases include the square and the triangular lattice (Figure 1.5). However, it is known that pc ∼ (2d)^(-1) asymptotically as d → ∞ for cubic lattices.
So far we know the picture for values of p smaller than pc and for values of p greater than pc, but how does the system behave when p = pc? This is known as the critical phase, and again, the answer may depend on the dimension. It is known that, at the critical phase, percolation does not occur when d = 2 or d ≥ 11 (see [15] and [13]). The techniques used in the proof of the case
d = 2 are fundamentally different from those used in the case d ≥ 11. It is widely conjectured that percolation does not occur at the critical phase in any dimension.
Consider now a different version of this process. Take a finite hexagonal tiling where each hexagon has side length δ (see Figure 1.6(a)). For each hexagon, flip a coin and paint it black if it lands heads or leave it blank if it lands tails. Paint all the blank hexagons on the top face of the lattice blue and continue to paint blue every blank hexagon neighbouring a blue hexagon. Do this until there are no blank hexagons neighbouring a blue hexagon. This process is an example of site percolation, and it shares many similarities with the bond percolation process described before. If there is a blue hexagon in the bottom face, we say that a top-bottom crossing has occurred.
Of course, the probability of having a crossing depends on the probability p of painting a hexagon black. Moreover, the crossing probability depends on the specific arrangement of the tiling you are interested in. For instance, let us compare a square piece of the tiling with a rectangular one, i.e., compare a piece of the tiling with the same number of hexagons on each side with a piece of the tiling having a shorter side (as illustrated in Figure 1.7). Clearly, a top-bottom crossing occurs more easily in a rectangular piece like the one in Figure 1.7(b), since it requires fewer sites to be open.
Due to applications in, e.g., ferromagnetism, it is of great physical importance to determine the crossing probability when p = 1/2, that is, at the critical point of percolation in the infinite case. Calculating this crossing probability becomes especially important in the limiting case when δ approaches zero (as indicated in Figure 1.8) and for general shapes, i.e. not just squares and rectangles. The limit case when δ → 0 is known as the scaling limit.
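The hexagon experiment can also be simulated. A convenient trick is to index the hexagons by axial coordinates, so that each cell has six neighbours; the patch below is a rhombus rather than a rectangle, a common simplification. Everything here (patch size, trial count, names) is an arbitrary illustrative choice of mine, not a construction from the thesis.

```python
import random

# Axial coordinates for a hexagonal tiling: cell (q, r) has six neighbours.
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_crossing(n, p, rng):
    """Top-to-bottom crossing through blank hexagons on an n x n rhombic patch.

    Each hexagon is painted black ("heads") with probability p and left
    blank otherwise; we flood-fill from the blank cells in the top row.
    """
    blank = {(q, r) for q in range(n) for r in range(n) if rng.random() >= p}
    stack = [(q, 0) for q in range(n) if (q, 0) in blank]
    seen = set(stack)
    while stack:
        q, r = stack.pop()
        if r == n - 1:
            return True  # reached the bottom row
        for dq, dr in NEIGHBOURS:
            cell = (q + dq, r + dr)
            if cell in blank and cell not in seen:
                seen.add(cell)
                stack.append(cell)
    return False

rng = random.Random(0)
freq = sum(hex_crossing(15, 0.5, rng) for _ in range(400)) / 400
print(freq)  # at the critical value p = 1/2, neither close to 0 nor to 1
```

This is exactly the regime where Cardy's conjectured formula, discussed next, predicts the precise limiting crossing probability for general shapes.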
In 1992, Cardy conjectured a formula to calculate the crossing probability in the scaling limit (see [7] for more details).
In the late 1990s, Schramm introduced and initiated the study of a family of conformally invariant random fractal curves referred to as SLEκ. SLE is an acronym for stochastic Loewner evolution (nowadays referred to as Schramm-Loewner evolution), and κ is a parameter that controls the qualitative behaviour of the SLE curves (one can think of κ as a parameter that controls the roughness of the curve). Already in the 1970s, it was conjectured by physicists (see [26]) that the scaling limit of many critical processes in the plane should be conformally invariant in the sense that they could be described by quantum field theories with conformal symmetries (so-called conformal field theories, or CFTs). However, a precise mathematical meaning was lacking. Schramm focused instead on describing the random simple curves formed by looking at interfaces between clusters of different colours. He showed that if one assumes conformal invariance and a special Markovian property satisfied in many cases of interest, then, roughly speaking, the only possible scaling limit of an interface is an SLE process. The exact value of κ would depend on the specific model for which the scaling limit was taken.
Figure 1.6. (a) The hexagonal lattice; (b) The hexagonal lattice after randomly filling some hexagons; (c) The liquid cannot flow through to the bottom; (d) An example where there is both a top-bottom and a right-left crossing.
Figure 1.7. The probability of a top-to-bottom crossing in (b) is higher than the probability of a top-to-bottom crossing in (a).
Figure 1.8. The hexagons on the top face and to the left of W are painted blue, while the hexagons on the top face and to the right of W are painted black. The red curve is the interface curve. (a) δ = 0.5 cm; (b) δ = 0.25 cm.
The foundations of the theory of SLE processes were then laid in a series of papers by Lawler, Schramm and Werner (see [18] and [27] for examples). The family of SLE processes (and related discrete models) is by now a rich mathematical subject born from the mixture of probability theory and complex analysis.
Let us consider the scaling limit of the model described above (i.e. the model with hexagons). Start with a rectangular piece of the hexagonal grid, as the one in Figure 1.7(b), and perform the site percolation process explained previously. Additionally, let W denote the middle point of the top face and paint all hexagons to the right of W black, and all hexagons to the left of W blue. Next, starting from W, explore the edges of the grid, keeping a black hexagon to your left and a blue hexagon to your right (see Figure 1.8 for an illustration). Keep exploring until you reach another face of the rectangle. The trajectory of such an exploration procedure is known as an interface curve. Lawler, Schramm and Werner proved in [19] that if the interface curve could be shown to be conformally invariant in the scaling limit, then it must be described by an SLE6 process, and furthermore they proved a continuum version of Cardy’s formula for SLE6.
Later, in 2001, Smirnov [30] proved that Cardy’s formula was indeed correct for the scaling limit of the two-dimensional site percolation process described above. Surprisingly, Smirnov did not take the route suggested by Lawler, Schramm and Werner but instead proved Cardy’s formula from first principles. In fact, with a bit of additional work, Smirnov’s result implies convergence to SLE6.
Figure 1.9. When Alice calls Bob, her phone connects to T1, which connects to T2, and so on, creating a chain of connections until T4, which then connects to Bob. The shaded areas represent the towers’ respective connection ranges.
1.2.2 Continuum percolation
Results similar to some of the ones discussed in the previous section have been proven in other discrete settings. A characteristic common to those models is that the relative positions of the nodes (vertices) were known, and that the only randomness comes from determining the state of the edges. Motivated by communication networks, Gilbert [14] introduced the random planar network model in 1961, now known as the Boolean model, or sometimes the Gilbert disc model. In the Boolean model, the positions of the nodes are random and the nodes are connected if and only if they are closer than some pre-determined threshold value.
The Boolean model can be used as a simplified model for a local cellphone network. When someone makes a phone call using a mobile phone, the phone connects to a cellphone tower which tries to connect to the receiver of the call. In many cases, the two ends of the call are not in range of the same tower, so the call is transmitted over a sequence of towers with overlapping ranges, as in Figure 1.9. In this example, each tower represents a node and has a pre-determined range. Cellphones in range of a tower can transmit calls to and receive calls from that tower. Moreover, if the ranges of two different towers overlap, the towers can transmit the calls between themselves. Therefore, in order to successfully make a phone call, the two ends of the call must be inside the same connected range component.
Figure 1.10. A realization of the ink sprinkling process. Here, o denotes the origin.
The randomness in continuum percolation comes from the positioning of the nodes. The Boolean model can be seen as the continuous version of the bond percolation model discussed in the previous section, and it works as follows: imagine that we have a huge sheet of paper and randomly sprinkle ink on it from above. For simplicity, assume that each ink drop creates a perfect ball of radius 1. In this experiment we do not control where the ink drops land. Nevertheless, we can control the intensity by which we sprinkle the ink, here represented by λ. One can think of the intensity as the expected number of balls that hit a unit square, so that large (small) values of λ typically mean that we will see a large (small) number of balls hitting the unit square. Similar to the discrete case, we want to know if we can find a path inside the coloured area of the paper connecting the center of the paper to the border; see Figure 1.10. The ink sprinkling process partitions the space into two distinct regions: the coloured area, which we call the occupied region, and the blank area, called the vacant region. Despite having analogous properties, the two regions require different techniques in order to be studied.
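The ink-sprinkling experiment can be mimicked directly: sample a Poisson number of centres in a window, drop a unit disc around each, and check whether the origin is linked to the border through overlapping discs (two discs of radius 1 overlap exactly when their centres are within distance 2). The sketch below is an illustration only; the window size, the intensities, the trial counts and all names are my own choices, and the Poisson sampler is the simple textbook one.

```python
import math
import random

def poisson_sample(mean, rng):
    """Sample a Poisson random variable (Knuth's method; fine for moderate means)."""
    threshold, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def origin_to_border(lam, half_width, rng):
    """Does the occupied region join the origin to the window border?

    Unit discs are centred at the points of a Poisson process of intensity
    `lam` on the square [-half_width, half_width]^2.
    """
    area = (2 * half_width) ** 2
    n = poisson_sample(lam * area, rng)
    centres = [(rng.uniform(-half_width, half_width),
                rng.uniform(-half_width, half_width)) for _ in range(n)]
    # start from every disc that covers the origin (the center of the paper)
    stack = [i for i, (x, y) in enumerate(centres) if x * x + y * y <= 1]
    seen = set(stack)
    while stack:
        i = stack.pop()
        x, y = centres[i]
        if max(abs(x), abs(y)) >= half_width - 1:
            return True  # this disc touches the border of the window
        for j, (u, v) in enumerate(centres):
            # discs of radius 1 overlap when centres are within distance 2
            if j not in seen and (x - u) ** 2 + (y - v) ** 2 <= 4:
                seen.add(j)
                stack.append(j)
    return False

rng = random.Random(0)
for lam in (0.2, 1.0):
    freq = sum(origin_to_border(lam, 6.0, rng) for _ in range(100)) / 100
    print(lam, freq)
```

As with the coin bias p in the discrete model, increasing the intensity λ makes the connection event more and more likely, which is the phase transition described next.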
Figure 1.11. (a) Angels and Devils by M.C. Escher. In hyperbolic space, every angel and devil has roughly the same size; (b) A realization of the Poisson Boolean model in hyperbolic space.
As in the discrete percolation model, we can consider the model in an infinite sheet, and as before, the occupied region undergoes a phase transition. There is a critical intensity λc such that for all intensities λ < λc there are no unbounded occupied regions. On the other hand, for all intensities λ > λc there is a unique unbounded occupied region. Moreover, there is another critical intensity λv such that for all λ < λv there is a unique unbounded vacant region and for all λ > λv there is no unbounded vacant region. It is known that in two-dimensional space the relation λc = λv holds (see [1]). In three or more dimensions, simulations indicate that λc < λv, so that for all intensities λ ∈ (λc, λv) there are unbounded components both in the occupied and in the vacant region, but a proof is still missing. Furthermore, little is known about the behaviour of the model at the critical intensities λ = λc or λ = λv, even less than what is known in the discrete case.
One can change the model described above and obtain some quite interesting variants of it. For instance, we can consider the Boolean process in hyperbolic space. In hyperbolic space (see Figure 1.11), the way we measure distances (the metric) is different, so that every ball in Figure 1.11(b) has the same size using this new metric. The image gets distorted because we are representing it in the usual Euclidean space (with the usual metric). In the Boolean model of percolation in hyperbolic space it is known that λc < λv even in two dimensions (see [33]).
Another aspect we can change is the geometry of the ink balls. A major part of this thesis considers the Poisson cylinder model, a model similar to the one described above, but instead of randomly sprinkling the paper with ink, we randomly draw infinitely long rectangles (or cylinders in higher dimensions).
This drastically changes the problem. When one considers the ball process, events occurring in bounded and distant regions are independent, since the balls are bounded. This is not the case with cylinders. This phenomenon is called infinite range dependence and poses new challenges to the area. Furthermore, since each single cylinder is an unbounded connected component in itself, percolation occurs within a single cylinder, and therefore some of the questions we asked earlier become moot. In this thesis we study two facets of the cylinder model: cover times and the geometry of the vacant set. We will give more details in later sections.
1.2.3 Fractal percolation
The last percolation model we will discuss in this section is a fractal percolation model, often known as Mandelbrot’s fractal percolation model (see [21]). The model is constructed as follows: consider the square with side length 1 and split it into 9 equal squares, that is, 9 squares of side length 1/3. For each of these smaller squares, flip a coin and remove the square from the process if the coin lands on heads, or keep the square if the coin lands on tails. A realization of the process can be seen in Figure 1.12(b). Now split the remaining squares into 9 smaller squares and flip the same coin for each of the squares of side length 1/9 as before. Remove the smaller square if the coin lands on heads and do nothing otherwise. A realization is depicted in Figure 1.12(c). Repeat this process ad infinitum. This procedure results in a (possibly empty) random set, which we will denote by M. Note the similarity with the construction of the Cantor set explained in Section 1.1; in fact, M is sometimes referred to as a random Cantor set. We are interested in studying various properties of M.
Chayes, Chayes and Durrett proved in [9] that the fractal percolation model undergoes several phase transitions. There is a critical probability pe (in fact, it is known that pe = 1/9) such that if the probability p of tails in your coin satisfies p ≤ pe then there will be nothing left with probability 1, i.e., the whole unit square will be removed. On the other hand, for p > pe, the remaining set is non-empty with positive probability. When M is not empty, it is known that M is statistically self-similar, a property common to the random fractal sets described in Section 1.1. The set M undergoes a second phase transition. Chayes, Chayes and Durrett also proved in [9] that there exists pc < 0.999 such that for all p ∈ (pc, 1) the random set M connects the left and right faces of the original square with positive probability.
The phase (pc, 1] is called the connected phase since for p ∈ (pc, 1], M contains a connected component crossing the original square with positive probability.
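The subdivision scheme just described is easy to simulate. The following sketch (our own illustration, with hypothetical function and parameter names) builds the retained squares level by level; each round multiplies the expected number of squares by 9p, which is the branching-process heuristic behind pe = 1/9.

```python
import random

def fractal_percolation(p, levels, rng):
    """Squares retained after `levels` subdivision rounds of Mandelbrot's
    fractal percolation on the unit square; each square is (x, y, side)."""
    squares = [(0.0, 0.0, 1.0)]
    for _ in range(levels):
        nxt = []
        for x, y, s in squares:
            t = s / 3.0
            for i in range(3):
                for j in range(3):
                    if rng.random() < p:  # coin lands on tails: keep
                        nxt.append((x + i * t, y + j * t, t))
        squares = nxt
    return squares

rng = random.Random(0)
kept = fractal_percolation(p=0.8, levels=3, rng=rng)
# Expected number of retained squares after n rounds is (9p)**n, here 7.2**3.
print(len(kept))
```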
Figure 1.12. A realization of the fractal percolation model.
1.3 Coverage processes
Coverage processes are another topic studied in this thesis. In a nutshell, they work as follows: start with a fixed set (often large), which here will be called the base set, and several smaller sets, called the covering sets. Next, the covering sets are placed at random locations. In this thesis, we consider two facets of this process. The first is the properties of the uncovered set, that is, the points of the base set not touched by any covering set. Recall from Section 1.2.2 that the uncovered set is called the vacant region. Second, we consider a bounded base set and investigate how long we need to expose it to a rain of covering sets until it is fully covered. We call this the cover time. These topics are explained in more detail in the next subsections.
1.3.1 The vacant region
We will start with a description of a model which is closely related to the one studied in this thesis. Using our ink and paper analogy, the fractal ball model can be described as follows: Imagine that the sheet of paper you have is infinitely large and that you have a sprinkling machine that can reach everywhere on this sheet. Moreover, your sprinkling machine can control the size of the ink drops. Informally (but slightly misleadingly), start by sprinkling drops of radius 1 with intensity λ. This means that, when exposing a unit square to the ink rain for one second, the expected number of drops falling inside it is λ. Then continuously reduce the radius of the drops and increase the intensity of the process proportionally in order to keep the mean area covered by the ink constant. So if you check the settings of your machine after some time and find that it is sprinkling drops of radius 1/2, then the intensity will be 4λ. This means that if you look at a bounded region of the paper you should find roughly four times as many balls of radius 1/2 as of radius 1.¹ The first steps of this process can be found in Figure 1.13. Continue this process on all smaller scales. If we start with a small enough λ, then not much of the paper will be covered in the beginning of the process, and since the balls become smaller and smaller, it will in fact be the case that the paper is never fully covered at all. On the other hand, if we start with a huge λ, the paper is quickly covered. We then ask for which values of λ the paper becomes fully covered. Moreover, when the paper is not fully covered, what does the uncovered part look like? This is an example of a coverage process.
Coverage processes have their origins in the 1930s, with Stevens in [31]. In it, the author studied the number of “gaps” left in a circle with unit circumference after randomly placing identical arcs.
In our definition, Stevens used a circle with unit circumference as a base set and n identical arcs of length a < 1 as covering sets. He then provided the probability distribution function for the number of gaps left in the circle after randomly placing the arcs. Different versions of this class of problems were later extensively studied, each bringing new challenges to the subject. Take as examples Dvoretzky [11], where the author used the circle as base set and infinitely many arcs of smaller and smaller lengths as covering sets, and, later, Shepp [29] and Mandelbrot [20], who studied the case where the real line is the base set and the covering sets are infinitely many intervals of random lengths. Some more examples are discussed in Section 2.3 and the references therein. More recently, Biermé and Estrade [3] studied the covering of the whole d-dimensional Euclidean space using balls of random radii as covering sets. They established the existence of a critical intensity λ∗ such that for λ > λ∗ the vacant region is empty, that is, the whole space is covered by the balls. For λ < λ∗ the vacant set is non-empty, and in this case the authors calculated the Hausdorff dimension of the vacant set. An explicit value for λ∗ was also provided. In fact, their result is more general than what we state here.
The fractal ball model was later further studied by Broman, Jonasson and Tykesson [5], who determined the behaviour of the process at the critical point λ∗. They proved that the process is in the empty phase when λ = λ∗.

¹Here, we have 4λ instead of 2λ because we are scaling 2-dimensional objects. To better see this, think about a square. How does the area of a square change when the sides of the square are halved?

Figure 1.13. A realization of the fractal ball model. The darker balls have larger radii.
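The scale-invariance constraint of the fractal ball model (halving the drop radius while quadrupling the intensity) can be checked with a few lines of arithmetic; the value of λ below is an arbitrary illustration, not a parameter from the thesis.

```python
import math

lam = 0.1  # intensity for drops of radius 1 (arbitrary illustrative value)

# In dimension 2, drops of radius r arrive with intensity lam / r**2, so
# the expected ink area delivered per unit time and unit area of paper,
# intensity * pi * r**2 = lam * pi, is the same on every scale.
rows = []
for k in range(5):
    r = 2.0 ** (-k)
    intensity = lam / r ** 2
    area_rate = intensity * math.pi * r ** 2
    rows.append((r, intensity, area_rate))
    print(f"r = {r:6.4f}   intensity = {intensity:7.2f}   area rate = {area_rate:.4f}")
```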
Figure 1.14. The base set is the red square and the covering sets are balls. (a) The red square was exposed to the ink rain for 10 seconds and is not covered; (b) The red square was exposed to the ink rain for 35 seconds and is now covered.
1.3.2 Cover times
Imagine now that we have a sheet of paper with a base set on it. Sprinkle some ink on the paper from above, like in the experiment discussed in Section 1.2.2, where each ink drop leaves an identical ball mark on the paper. Additionally, assume that the ink is raining from above with intensity λ = 1, similar to what was done in the previous section, but this time the balls are always of the same size and the intensity is kept constant during the whole process. This means that the expected number of drops falling inside a unit square each second is equal to 1. We then expose the paper to the rain and measure the time until the picture is completely covered in ink. The cover time is then the random time needed to fully paint the picture. Again, here we cannot control where the ink balls land, only the rain intensity (here, set to be one). This process is illustrated in Figures 1.14(a) and 1.14(b).
It is intuitively clear that the cover time of the base set must depend both on the size of the base set and the sizes of the covering sets. Base sets with larger diameters require more covering sets in order to become covered, and therefore will have a larger cover time compared to sets of smaller diameters. On the other hand, if we use covering sets with larger diameters, each individual set will cover a larger proportion of the base set and fewer covering sets will be required in order to cover it. Thus in this case the cover time will be smaller. This is depicted in Figures 1.15(a) and 1.15(b).
In the 1980s, Janson [16] studied the problem in a very general form, providing theorems for any dimension and a large class of geometries of base and covering sets. The main result of Janson’s paper describes how the cover time behaves when the size of the covering sets shrinks to zero, or equivalently, when the size of the base set grows. He provided the asymptotic distribution of the cover time in this case.
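The rain experiment above can be mimicked by a rough Monte Carlo sketch (our own illustration, not a method from the papers): balls of a fixed radius arrive at unit rate per unit area, and coverage of the base square is tracked on a finite grid, so the resulting time only approximates the true cover time.

```python
import random

def cover_time(side, radius, rng, grid_n=40):
    """Time until a `side` x `side` square is covered by unit-rate rain of
    balls of radius `radius`.  Ball centres land uniformly in a margin
    around the square so that boundary points can also be covered."""
    lo, hi = -radius, side + radius
    area = (hi - lo) ** 2
    pts = [(i * side / (grid_n - 1), j * side / (grid_n - 1))
           for i in range(grid_n) for j in range(grid_n)]
    uncovered = set(range(len(pts)))
    t = 0.0
    while uncovered:
        t += rng.expovariate(area)          # next drop: total rate = 1 * area
        cx, cy = rng.uniform(lo, hi), rng.uniform(lo, hi)
        hit = [k for k in uncovered
               if (pts[k][0] - cx) ** 2 + (pts[k][1] - cy) ** 2 <= radius ** 2]
        uncovered.difference_update(hit)
    return t

rng = random.Random(1)
print(cover_time(side=2.0, radius=1.0, rng=rng))
```

Enlarging the base set or shrinking the balls makes the returned time stochastically larger, matching the intuition in the text.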
Figure 1.15. (a) The base set has side length 2 and each ball has radius 1. (b) The base set has side length 4 and each ball has radius 1.5. Both pictures were exposed to the same rain for 20 seconds. The cover times here will be smaller than the ones in Figure 1.14.
In a different setting, Belius [2] recently determined the asymptotic distribution of the cover time when the base set is a subset of the integer lattice and the covering sets are bi-infinite trajectories of simple random walks on the d-dimensional lattice. The problem studied by Belius is substantially different from before, since the covering sets have infinite diameter. In his work, Belius used the random interlacement model earlier introduced by Sznitman in [32].
2. Formal introduction
In this chapter we will develop the topics discussed in Chapter 1 in more mathematical detail. Here we will make precise definitions, state a few important theorems and, when viable, sketch the proofs. The sections in this chapter mirror those of Chapter 1 and therefore we will refrain from explaining the concepts already established previously.
2.1 Fractal Geometry
We will start by defining the lower and upper box-counting dimensions. For that, let A ⊂ R^d be a bounded set and let Nδ(A) denote the smallest number of boxes of diameter δ needed to cover A. The lower and upper box dimensions are defined by

    dim̲_B A = lim inf_{δ→0} log Nδ(A) / log(1/δ)    and    dim̄_B A = lim sup_{δ→0} log Nδ(A) / log(1/δ),

respectively. By definition, we always have dim̲_B A ≤ dim̄_B A for all bounded sets A. When dim̲_B A = dim̄_B A we simply call the common value the box-counting dimension of A and write dim_B A.
The box-counting dimension is relatively easy to use, but it is not always the best choice. For instance, the box-counting dimension is invariant under closures, that is, dim_B A = dim_B Ā, where Ā denotes the closure of A (the smallest closed set containing A). This leads to some strange consequences when considering countable dense sets. Take as an example the set Q = ℚ ∩ [0, 1]. Since Q is dense in [0, 1], we have that Q̄ = [0, 1] and therefore dim_B Q = dim_B Q̄ = dim_B [0, 1] = 1. Nevertheless, we have that dim_B {q} = 0 for every q ∈ Q. Thus, in general, dim_B ⋃_i Ai ≠ sup_i dim_B Ai for a countable family of sets (Ai)_{i≥0}. However, this equality (called countable stability) is considered natural, and it is therefore desirable to improve upon the definition of dimension so that countable stability holds. The most common way this is resolved is by using the concept of Hausdorff dimension. This, of course, comes at an expense: the Hausdorff dimension is not as simple to use as the box-counting dimension.
The Hausdorff dimension is defined as follows. Consider some set A. A collection of sets (Ui)_{i≥1} is called a δ-cover of A if A ⊂ ⋃_i Ui and diam(Ui) ≤ δ for all i ≥ 1. Here diam(Ui) = sup{d(x, y) : x, y ∈ Ui}
is the diameter of the set Ui. Then, the α-dimensional Hausdorff measure of A is defined by

    H^α(A) := lim_{δ→0} inf{ Σ_{i≥1} diam(Ui)^α : (Ui)_{i≥1} is a δ-cover of A }.
It is not difficult to show that the α-dimensional Hausdorff measure is in fact a measure. Furthermore, the Hausdorff measure is non-increasing as a function of α, that is, H^α(A) ≤ H^β(A) for all β ≤ α. Moreover, the α-dimensional Hausdorff measure assumes only the values 0 or ∞, except possibly for one value of α. The Hausdorff dimension is then defined by
    dim_H(A) := inf{α > 0 : H^α(A) = 0} = sup{α > 0 : H^α(A) = ∞},

that is, the exact value of α where the α-dimensional Hausdorff measure of A “jumps”. It is important to mention that the Hausdorff dimension is not restricted to bounded sets, as the box-counting dimension is.
Given a set A, assume we want to show that dim_H(A) = h for some h ∈ (0, ∞). A naive approach to this problem relies on two parts. The first is to show that H^h(A) < ∞, and thus by definition dim_H(A) ≤ h. The second is to show that H^h(A) > 0, and again by definition we would have that dim_H(A) ≥ h. In many cases, an upper bound on the Hausdorff measure comes naturally from the construction of the sets, due to self-similarity. On the other hand, determining a lower bound on the Hausdorff measure can be challenging even in the simplest examples, since one must consider all possible δ-covers. In order to tackle this problem, various techniques have been developed. One common method to find a lower bound on the Hausdorff dimension is to use the so-called Frostman’s Lemma, a version of which we state as follows (for a complete description see Falconer [12] or Mattila [23]).
Lemma 2.1.1. If there is a measure µ supported on A such that 0 < µ(R^d) < ∞ and

    ∫∫_{A×A} dµ(y) dµ(x) / |x − y|^α < ∞,    (2.1)

then H^α(A) = ∞ and dim_H A ≥ α.
Frostman’s lemma is often the only viable way of finding a lower bound on the Hausdorff dimension. Recall our discussion above regarding attempts to prove that dim_H(A) ≥ h. Instead of proving that H^h(A) > 0 directly, we can find a measure µ satisfying the conditions of Lemma 2.1.1 for every α < h. Frostman’s lemma then implies that H^α(A) = ∞ for all α < h and therefore dim_H A ≥ h.
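As an illustration of this method (a standard computation, not taken from the thesis), Frostman’s lemma recovers the lower bound dim_H C ≥ log 2/log 3 for the middle-thirds Cantor set C:

```latex
% Let \mu be the natural measure on the middle-thirds Cantor set C,
% assigning mass 2^{-n} to each of the 2^n level-n intervals.
% For fixed x \in C, split the energy integral over the annuli
% 3^{-(n+1)} < |x - y| \le 3^{-n}; the ball of radius 3^{-n} around x
% meets at most two level-n intervals, so \mu(B(x,3^{-n})) \le 2 \cdot 2^{-n}:
\int_C \frac{d\mu(y)}{|x-y|^{\alpha}}
  \;\le\; \sum_{n \ge 0} 3^{(n+1)\alpha}\, \mu\bigl(B(x, 3^{-n})\bigr)
  \;\le\; 2 \cdot 3^{\alpha} \sum_{n \ge 0} \Bigl(\tfrac{3^{\alpha}}{2}\Bigr)^{n}
  \;<\; \infty \quad \text{whenever } 3^{\alpha} < 2.
% Integrating over x shows that (2.1) holds for every \alpha < \log 2/\log 3,
% and Lemma 2.1.1 then gives \dim_H C \ge \log 2/\log 3.
```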
2.2 Percolation
In this section we will develop the formal definitions of percolation in the discrete, continuous and fractal cases and present some results relevant to the scope of this thesis. For more information and detailed proofs, see Grimmett [15] for discrete percolation and Meester and Roy [24] for continuum percolation. When possible, we also present a sketch of the proofs.
2.2.1 Discrete percolation
Let Z^d be the graph whose vertex set V consists of the points in R^d with integer coordinates, and whose edge set E consists of the pairs {x, y} of vertices at Euclidean distance 1. Let p ∈ [0, 1] and consider the product measure Pp on the sample space Ω = {0, 1}^E given by Pp(ω(e) = 1) = p for all e ∈ E. Given a realization ω ∈ Ω, we will refer to the edge e as being open if ω(e) = 1 and closed otherwise.
Recall that in Chapter 1 we claimed that there exists a non-trivial pc such that percolation occurs for all values of p > pc and does not occur for any value of p < pc. In order to make this mathematically precise, we define a finite path of length n in Z^d as an alternating sequence of vertices and edges x0, e0, x1, . . . , en−1, xn where x0, . . . , xn are distinct vertices and each edge ei satisfies ei = {xi, xi+1}. A circuit is a path such that x0 = xn. We call a path open if all its edges are open (and closed if all its edges are closed). Take the subgraph of Z^d consisting of only the open edges. The connected components of this subgraph are called open clusters. We will write Cx for the open cluster containing the vertex x. We say that percolation occurs when there exists some x ∈ Z^d such that Cx contains infinitely many vertices. Note that, by translation invariance of both the lattice and the probability measure Pp, the distribution of Cx is independent of the choice of x. For simplicity, we then usually investigate the open cluster of the origin, Co. Let |Co| denote the cardinality of Co. The percolation function is then defined as
    θ(p) := Pp(|Co| = ∞).

A central problem in percolation theory is to determine the existence of a non-trivial critical probability

    pc = pc(d) := sup{p : θ(p) = 0}.

In other words, the question is whether there is a non-trivial value pc satisfying

    θ(p) = 0 if p < pc    and    θ(p) > 0 if p > pc.
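The percolation function can be approximated by Monte Carlo simulation on a finite box, replacing the event {|Co| = ∞} with the event that the open cluster of the origin reaches the boundary of the box. The sketch below is our own illustration (function names and parameters are hypothetical); the comment uses the known value pc(2) = 1/2 only as a guide.

```python
import random
from collections import deque

def origin_reaches_boundary(p, n, rng):
    """One sample of bond percolation on [-n, n]^2: open each edge with
    probability p and check whether the open cluster of the origin reaches
    the boundary of the box (a finite-volume proxy for |Co| = infinity)."""
    open_edge = {}
    def is_open(e):
        e = tuple(sorted(e))
        if e not in open_edge:
            open_edge[e] = rng.random() < p   # sample each edge once (lazily)
        return open_edge[e]
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if max(abs(x), abs(y)) == n:
            return True
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in seen and is_open(((x, y), nb)):
                seen.add(nb)
                queue.append(nb)
    return False

rng = random.Random(2)
for p in (0.3, 0.7):   # pc(2) = 1/2, so these straddle the transition
    hits = sum(origin_reaches_boundary(p, 20, rng) for _ in range(200))
    print(p, hits / 200)
```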
We have then the following theorem.
Theorem 2.2.1. For the bond percolation process in Z^d we have that pc ∈ (0, 1) for every d ≥ 2.
The first observation one makes when proving Theorem 2.2.1 is that the critical probability is a decreasing function of the dimension. This is easily seen by considering a (d − 1)-dimensional sub-lattice inside a d-dimensional lattice. Of course, if the process percolates in the smaller lattice, it will also percolate in the larger one. We conclude that pc(d) ≤ pc(d − 1). Therefore, it suffices to show that pc(d) > 0 for d ≥ 2 and that pc(2) < 1.
To show that pc(d) > 0, a common strategy is to count the number of open paths of length n starting at the origin. If the origin belongs to an infinite open cluster, then there exist open paths of every length starting at the origin. Thus the percolation function is bounded from above by the probability that there exists at least one open path of length n, for every n. More formally, let σ(n) denote the number of paths of length n starting at the origin and N(n) the number of open paths of length n starting at the origin. Since there are 2d possible choices for the first step and (2d − 1)^{n−1} choices for the next steps (backtracking is not allowed), we have that σ(n) ≤ 2d(2d − 1)^{n−1}. Moreover, a given path of length n is open with probability p^n. Thus
    θ(p) ≤ Pp(N(n) ≥ 1) ≤ p^n σ(n) ≤ (2d/(2d − 1)) (p(2d − 1))^n,

for every n. Furthermore, (p(2d − 1))^n → 0 whenever p < 1/(2d − 1) and therefore pc(d) ≥ 1/(2d − 1).
The argument to show that pc(2) < 1 is a little more involved. It exploits the fact that we are now dealing with a two-dimensional lattice and can use the concept of planar duality. We define the dual of Z^2 (sometimes denoted by (Z^2)∗) as the lattice obtained by translating Z^2 by the vector (1/2, 1/2) (see Figure 2.1(a)). In this way, it is easy to see that each edge e ∈ Z^2 crosses a unique edge e∗ in the dual. Declaring the edge e∗ in the dual open or closed according to whether its corresponding edge e in the original lattice is open or closed results in a bond percolation process in the dual with the same edge-keeping probability as in the original process. It is then intuitive to see (but surprisingly difficult to prove) that the open cluster of the origin in Z^2 is finite if and only if there is a closed circuit in the dual with the origin in its interior. Such an example is illustrated in Figure 2.1(b). Using a counting argument similar to the one in the previous case, but counting the number of closed circuits in the dual instead of the number of open paths, gives us that the percolation function θ(p) is bounded away from zero if p is sufficiently large.
Next, we study the infinite open cluster. Since the existence of an infinite open cluster is independent of the states of any finite collection of edges, the Kolmogorov 0-1 law applies. We can conclude that the probability of the existence of an infinite open cluster is 0 if θ(p) = 0 and 1 if θ(p) > 0. Unfortunately the 0-1 law does not give any information on the actual number of unbounded open clusters. This is resolved in the next theorem, a special case of the general theorem by Burton and Keane [6] mentioned in Section 1.2.1.
Figure 2.1. (a) The solid lattice is Z^2 and the dashed lattice is the dual; (b) Each solid line represents an open edge in Z^2 while the dashed lines represent a closed edge in the dual. A closed circuit surrounding the origin o is shown.
Theorem 2.2.2. If p is such that θ(p) > 0, then there exists exactly one un- bounded open cluster with probability 1.
The idea behind the proof goes as follows: Let N be the random variable given by the number of unbounded connected components. Observe that N is translation invariant and, by ergodicity, it is almost surely constant, i.e., there exists n ∈ {0, 1, . . .} ∪ {∞} such that P(N = n) = 1. Assume that n ∉ {0, 1, ∞}. Since there is a finite number of unbounded open clusters, there exists a sufficiently large box B intersecting all unbounded components with positive probability. Let E1 = {∂B intersects all n unbounded components} and note that E1 is measurable with respect to the state of the edges in B^c. Then, let E2 = {all edges in B are open} and note that since E2 only depends on the state of the edges inside B, the events E1 and E2 are independent. We conclude that

    P(N = 1) ≥ P(E1 ∩ E2) = P(E1) P(E2) > 0.
Since N is almost surely constant, it follows that P(N = 1) = 1, which is a contradiction, since we assumed n ∉ {0, 1, ∞}; thus we must have n ∈ {0, 1, ∞}. It remains to rule out the infinite case, but that proof is quite involved and therefore we will not even sketch it.
2.2.2 Continuum percolation
Continuum percolation models rely on placing random objects at random locations in a continuous space. This placement is usually done using a Poisson
point process, which we now describe. A Poisson point process Φ is a random counting measure on a measure space (X, µ) satisfying two conditions. First, for disjoint Borel sets A1, . . . , An the random variables Φ(A1), . . . , Φ(An) are independent. Second, the random variable Φ(A) is Poisson distributed with mean µ(A).
Take as a first example the Boolean model, explained in Section 1.2.2. We begin by randomly selecting points in the space R^d, representing the centres of the ink drops. This is done by a Poisson point process in R^d with intensity measure λℓd, where λ is a real parameter (that we called the intensity in Section 1.2.2) and ℓd is the d-dimensional Lebesgue measure on R^d. More precisely, we have the following relation:

    P(Φ(A) = n) = e^{−λℓd(A)} (λℓd(A))^n / n!,

for any bounded Borel set A and n ≥ 0. We then define the occupied region C and the vacant region V as
    C = ⋃_{x∈Φ} B(x, 1)    and    V = R^d \ C,

respectively. The Poisson Boolean model is defined by the pair (Φ, λ), where Φ is a Poisson point process and λ is the intensity parameter. One can also consider variants with random radii, but we will restrict our analysis to constant, unit radii. The Poisson Boolean model then partitions the space into two regions, the occupied region (the coloured area) and the vacant region (the blank area), as in Figure 1.10. Furthermore, we denote by Cx the occupied connected component containing the point x. Similarly, we denote the vacant connected component containing the point x by Vx. We say that percolation occurs in the occupied or in the vacant region (alternatively, that C or V percolates) when Cx or Vx is unbounded for some x ∈ R^d, respectively. We then define the critical intensities λc and λv as

    λc := inf{λ > 0 : P(C percolates) > 0}    and    λv := sup{λ > 0 : P(V percolates) > 0}.

We can now state a theorem collecting some of the most important basic results.
Theorem 2.2.3. For any d ≥ 2, λc ∈ (0, ∞). Therefore, for λ < λc there is no percolation in C while for λ > λc there is percolation in C. In the same way, λv ∈ (0, ∞) so that for λ < λv there is percolation in V while for λ > λv percolation does not occur in V.
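The model just defined is easy to simulate in a bounded window; the sketch below (our own illustration, with hypothetical function names) draws the Poisson number of centres by counting unit-rate exponential arrivals, since the number of arrivals of a unit-rate process before time λℓd(A) is Poisson with mean λℓd(A).

```python
import random

def poisson_boolean_sample(lam, box, rng):
    """Centres of a Poisson Boolean model with intensity `lam` in
    [0, box]^2; the number of centres is Poisson with mean lam * box**2."""
    n = 0
    acc = rng.expovariate(1.0)
    mean = lam * box * box
    while acc < mean:          # count unit-rate arrivals before `mean`
        n += 1
        acc += rng.expovariate(1.0)
    return [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]

def occupied(point, centres, radius=1.0):
    """Is `point` in the occupied region C, i.e. within `radius` of a centre?"""
    px, py = point
    return any((px - x) ** 2 + (py - y) ** 2 <= radius ** 2 for x, y in centres)

rng = random.Random(3)
lam, box = 0.5, 10.0
counts = [len(poisson_boolean_sample(lam, box, rng)) for _ in range(2000)]
print(sum(counts) / len(counts))   # close to lam * box**2 = 50
```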
As in the discrete case, it is natural to ask how many unbounded connected components there are. This is answered in the next theorem.
Theorem 2.2.4. In the Poisson Boolean model (Φ, λ), there can be at most one unbounded occupied component and at most one unbounded vacant com- ponent, almost surely.
Despite the similarities between the results for the Boolean model and the discrete percolation model, the techniques used to prove these theorems are not the same. For more information, see [24].
In R^2, it is known that λc = λv. However, for every d ≥ 3, it is conjectured that λc < λv, meaning that the occupied and vacant unbounded connected components can co-exist. The proof of uniqueness for the vacant unbounded component is slightly more complicated than the proof of uniqueness for the occupied unbounded component, the reason being the nature of the vacant set. When studying the occupied set, one can rely on the structure provided by the underlying point process. The vacant set, on the other hand, lacks this structure, since it is defined as the complement of the occupied region.
One can also consider the Boolean model in hyperbolic space H^d, studied in [33]. The hyperbolic space consists of the open unit ball in R^d centred at the origin, equipped with a metric that assigns the length

    L(γ) = ∫_0^1 2 |γ′(t)| / (1 − |γ(t)|^2) dt

to a curve γ : [0, 1] → B(o, 1), and the volume

    Vol(A) = ∫_A 2^d dx1 · · · dxd / (1 − |x|^2)^d

to a set A. In hyperbolic space, the behaviour of the unbounded connected components is slightly different. Theorem 2.2.3 still holds, but with the addition that λc < λv for all d. Moreover, for λ < λc, there will be a unique vacant component, as before. For λ ∈ (λc, λv), there will be infinitely many unbounded connected components, both in the vacant and in the occupied region. For λ > λv there will be a unique occupied component again.
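The length formula above can be checked numerically: integrating the metric along the straight radial segment from the origin to (r, 0) should reproduce the closed form 2 artanh r = log((1 + r)/(1 − r)). This is a sketch with our own function names, using a simple midpoint rule.

```python
import math

def hyperbolic_radial_length(r, steps=200_000):
    """Length of the straight segment from the origin to (r, 0) in the
    Poincare-ball metric ds = 2|dx| / (1 - |x|^2), by the midpoint rule."""
    h = r / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += 2.0 / (1.0 - t * t) * h
    return total

r = 0.9
numeric = hyperbolic_radial_length(r)
exact = math.log((1 + r) / (1 - r))   # = 2 artanh(r)
print(numeric, exact)
```

As r → 1 the length diverges, reflecting the fact that the boundary of the unit ball is infinitely far away in the hyperbolic metric.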
2.3 Coverage processes
In this section we will discuss some relevant results concerning coverage processes. The study of coverage processes started in 1939 when Stevens [31] studied the following problem: imagine you have a circle with unit circumference and n equal arcs of length l < 1. If you start placing these arcs uniformly
at random positions on the circle, what is the probability that the circle is fully covered (of course, overlaps are allowed)? More generally, Stevens calculated the probability distribution function of the number of gaps, that is, the parts of the circle not touched by the arcs. Next, we present some modern results concerning the vacant region and cover times.
2.3.1 The vacant region
In 1956, Dvoretzky [11] studied a version of Stevens’s problem. Dvoretzky considered the covering of the circle C with infinitely many arcs of different sizes.
Let (li)_{i∈N} be a sequence with li < 1 for every i and let Ai represent an arc of length li placed uniformly at random on C. Fix x ∈ C and observe that by the Borel–Cantelli Lemma we have that

    P(x ∉ Ai for all i) > 0    if and only if    Σ_{i=1}^∞ li < ∞.

It follows that any fixed x ∈ C is covered with probability 1 if Σ_{i=1}^∞ li = ∞. However, since there are uncountably many points in C, we cannot conclude that P(every x ∈ C is covered) = 1. Indeed, Dvoretzky proved the existence of a sequence (li)_{i∈N} such that Σ_{i=1}^n li grows slowly to infinity but P(every x ∈ C is covered) < 1.
The next natural problem is then to determine the conditions that the sequence (li)_{i∈N} must satisfy in order to completely cover C with probability 1. The problem was studied by several mathematicians and only in 1972 did Shepp [28] provide the complete solution. Shepp determined necessary and sufficient conditions on the sequence of lengths (li)_{i∈N} so that the circle is fully covered almost surely. In the same year, Shepp [29] published another paper where he considered a different but related problem: covering the whole line with randomly placed intervals of random lengths. In this version of the problem, one places intervals of random lengths on the real line using a Poisson process. More precisely, let Φ be a Poisson point process in R × (0, ∞) with intensity measure ℓ1 × µ, where ℓ1 is Lebesgue measure on R and µ is a locally finite non-negative measure on (0, ∞). To each point (x, y) ∈ Φ we associate the interval (x, x + y) on the real line and let C = ⋃_{(x,y)∈Φ} (x, x + y) be the covered region. Shepp’s [29] main result is then the following:
Theorem 2.3.1. We have that
    P(C = R) = 1 if ∫_0^1 exp( ∫_x^∞ (y − x) µ(dy) ) dx = ∞, and P(C = R) = 0 otherwise.
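Dvoretzky’s fixed-point computation above is elementary to verify numerically: for independently placed arcs, P(x is missed by all of the first n arcs) is the product of (1 − li), which stays bounded away from 0 exactly when the lengths are summable. The two length sequences below are our own illustrations.

```python
def prob_point_uncovered(lengths):
    """P(a fixed point is missed by every arc) = product of (1 - l_i),
    since the arcs are placed independently and uniformly."""
    p = 1.0
    for l in lengths:
        p *= (1.0 - l)
    return p

n = 10_000
summable     = [2.0 ** (-i - 2) for i in range(n)]       # total length < 1/2
non_summable = [1.0 / (2 * (i + 2)) for i in range(n)]   # harmonic, diverges
print(prob_point_uncovered(summable))       # stays bounded away from 0
print(prob_point_uncovered(non_summable))   # tends to 0 as n grows
```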
We can extend the model to higher dimensions by considering a Poisson point process Φµ in R^d × (0, ∞) with intensity measure ℓd × µ, where ℓd is Lebesgue measure on R^d. To each point (x, r) ∈ Φµ we associate a ball B(x, r) ⊂ R^d and define Cµ similarly to C. Shepp’s argument when proving Theorem 2.3.1 was one-dimensional and cannot be extended to d ≥ 2. However, in 2012 Biermé and Estrade made considerable progress towards a higher-dimensional version of Shepp’s result, using altogether different techniques. In order to explain these, begin by decomposing µ = µH + µL, where µH(dr) = 1(r ∈ (0, 1])µ(dr) and µL(dr) = 1(r ∈ (1, ∞))µ(dr). We can then consider the processes ΦµH and ΦµL separately. The process driven by µH uses only balls of small radii and is known as the high frequency process, whereas the process driven by µL uses only balls of large radii and is known as the low frequency process. We say that high frequency coverage occurs when P(R^d = CµH) = 1, and that low frequency coverage occurs when P(R^d = CµL) = 1. One can show that

    P(R^d = Cµ) = max{ P(R^d = CµH), P(R^d = CµL) }.

In words, if neither high nor low frequency coverage occurs, then there is no coverage by the combined process. Thus the problem is reduced to finding conditions for high or low frequency coverage separately. The condition for low frequency coverage is straightforward: one can show that P(R^d = CµL) = 1 if and only if ∫_1^∞ r^d µ(dr) = ∞. As the low frequency behaviour is then fully characterized, we shall from now on ignore it, consider only the high frequency part, and simply assume that µ = µH. The main result by Biermé and Estrade [3] concerning high frequency coverage is collected in the following.
Theorem 2.3.2. Let vd denote the Lebesgue measure of the d-dimensional unit ball. If P(R^d = CµH) = 1, then

    ∫_0^1 u^{d−1} exp( vd ∫_u^1 r^{d−1} (r − u) µ(dr) ) du = ∞.    (2.2)

On the other hand, if

    lim sup_{u→0} u^d exp( vd ∫_u^1 (r − u)^d µ(dr) ) = ∞,    (2.3)