Asaf Nachmias' Lectures at the PIMS Probability Summer School 2014: Random Walks on Random Fractals


Contents
1. June 3
1.1. Random walk interpretation of effective resistance
1.2. Network simplifications
2. June 5
2.1. Infinite network
2.2. Nash-Williams inequality
3. June 6
3.1. Random walk on the infinite incipient cluster
4. June 9
5. June 10
6. June 12
7. June 13
7.1. Random walks on planar graphs
8. June 17
9. June 19
9.1. Distributional limits
10. June 20
11. June 23
12. June 24
13. June 26
14. June 27

1. June 3

Reference: Chapter 8 of "Probability on Trees: An Introductory Climb" by Yuval Peres.

We start with a definition.

Definition 1.1. A network is a finite connected graph endowed with non-negative edge weights $\{c_e\}_{e\in E}$ called conductances. We call the reciprocals $r_e = 1/c_e$ the edge resistances. We fix two distinct vertices a and z, and we will be considering the process of travelling from a to z. In our study, harmonic functions will be the analogues of voltages.

Definition 1.2. A function $h : V \to \mathbb{R}$ is harmonic at $x \in V$ if
$$h(x) = \sum_{y : y\sim x} \frac{C_{xy}}{\pi_x}\, h(y),$$
where $\pi_x = \sum_{y : y\sim x} C_{xy}$ and $y \sim x$ means that y is a neighbour of x.

Definition 1.3. A voltage is a function V on the vertices that is harmonic on $G - \{a, z\}$. The point here is that the voltage function is completely determined by V(a) and V(z).

Lemma 1.4. Let $h : V \to \mathbb{R}$ be a voltage such that h(a) = h(z) = 0. Then h(x) = 0 for all $x \in V$.

Proof. Suppose not, and let the maximum M > 0 be attained at some point $x \in G - \{a, z\}$. If $x_1, x_2, \ldots, x_m$ are the neighbouring vertices, we must have
$$h(x) = \sum_{i=1}^m \frac{C_{x,x_i}}{\pi_x}\, h(x_i),$$

and so either there exists some i for which $h(x_i) > h(x) = M$, or $h(x_i) = M$ for every i. The former cannot happen since M is the maximum, so the latter is the only possibility. Proceeding in this way, we find that every vertex must have voltage M, which contradicts the boundary values. □

Existence: Why must such a function exist? Say |V| = n; then we simply have to solve n − 2 linear equations in n − 2 variables (since the boundary values are already given to us).

Alternatively, consider the network random walk $X_k$ with transition probabilities
$$p_{x,y} = \frac{C_{xy}}{\pi_x}.$$
Note that this random walk is reversible, i.e.
$$\pi_x\, p_{x,y} = \pi_y\, p_{y,x} \quad \text{for all } x, y \in V.$$
Put $h(x) = P_x(X_n \text{ visits } z \text{ before hitting } a)$.

We claim that this is a harmonic function. Indeed, if $x_1, x_2, \ldots, x_m$ are the neighbours of x, then in the very first step after starting from x the walker visits some $x_i$, $1 \le i \le m$, so
$$P_x(X_n \text{ visits } z \text{ before } a) = \sum_{i=1}^m P_x(X_1 = x_i,\ X_n \text{ visits } z \text{ before } a) = \sum_{i=1}^m P_x(X_1 = x_i)\, P(X_n \text{ visits } z \text{ before } a \mid X_1 = x_i) \quad [\text{Markov property}]$$
$$= \sum_{i=1}^m p_{x,x_i}\, h(x_i) = \sum_{i=1}^m \frac{C_{x,x_i}}{\pi_x}\, h(x_i),$$
which proves that it is indeed harmonic. Note that h(z) = 1 and h(a) = 0.
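To make this concrete, here is a minimal sketch in Python (the 4-vertex network and its conductances are an assumption, not from the lecture) that computes $h(x) = P_x(\text{visit } z \text{ before } a)$ by solving the linear system given by harmonicity and the boundary values.

```python
# Minimal sketch: solve for h(x) = P_x(hit z before a) on a small network by
# imposing harmonicity h(x) = sum_y (C_xy / pi_x) h(y) at x not in {a, z},
# with boundary values h(a) = 0, h(z) = 1.  The 4-vertex example is made up.
import numpy as np

# symmetric conductances C[x][y]; vertices 0..3, with a = 0, z = 3
C = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 3.],
              [0., 1., 3., 0.]])
a, z = 0, 3
n = C.shape[0]
pi = C.sum(axis=1)

A = np.eye(n)
b = np.zeros(n)
for x in range(n):
    if x == a:
        b[x] = 0.0                      # boundary value h(a) = 0
    elif x == z:
        b[x] = 1.0                      # boundary value h(z) = 1
    else:
        A[x, :] -= C[x, :] / pi[x]      # h(x) - sum_y (C_xy/pi_x) h(y) = 0

h = np.linalg.solve(A, b)
print(h)   # h[a] = 0, h[z] = 1, harmonic at the interior vertices
```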

In general, if you want a voltage with boundary values g(a) and g(z), put $g(x) = g(a) + h(x)[g(z) - g(a)]$. Next we define the current flow, which is a function on directed edges.

Definition 1.5. A flow from a to z is a function $\theta : \vec E \to \mathbb{R}$ satisfying:
(i) $\theta(x, y) = -\theta(y, x)$;
(ii) for all $v \in V - \{a, z\}$,
$$\sum_{w : w\sim v} \theta(v, w) = 0;$$
this is Kirchhoff's node law, which states that flow in must equal flow out at any vertex. Given a voltage h, the current flow associated with it is

$$I(x, y) = [h(y) - h(x)]\, C_{x,y}.$$
This has the following properties: (i) it is antisymmetric; (ii) Kirchhoff's law holds, by harmonicity.

To see the latter, note that
$$\sum_{y : y\sim x} I(x, y) = \sum_{y\sim x} C_{x,y}[h(y) - h(x)] = \sum_{y\sim x} h(y)C_{x,y} - h(x)\sum_{y\sim x} C_{x,y} = h(x)\pi_x - h(x)\pi_x = 0,$$
using the harmonicity of h.

A current flow satisfies the cycle law: for any cycle $\{e_1, e_2, \ldots, e_m\}$ we have
$$\sum_{i=1}^m r_{e_i}\, I(e_i) = 0.$$

To see this, let the cycle be given by the vertices $v_1, v_2, \ldots, v_m$, where $v_{m+1} = v_1$ and $e_i = (v_i \to v_{i+1})$. Then
$$\sum_{i=1}^m r_{e_i} I(e_i) = \sum_{i=1}^m \frac{1}{C_{v_i,v_{i+1}}}\,[h(v_{i+1}) - h(v_i)]\,C_{v_i,v_{i+1}} = \sum_{i=1}^m [h(v_{i+1}) - h(v_i)] = 0.$$
From now on, all voltages have h(z) = 0.

Definition 1.6. The strength of a flow θ is
$$\|\theta\| = -\sum_{x : x\sim a} \theta(a, x)$$
(there has been a little confusion about the sign of the summation on the right-hand side, but it is minor).

Lemma 1.7. If θ is a flow from a to z satisfying the cycle law, I is the current flow of a voltage h with h(z) = 0, and $\|\theta\| = \|I\|$, then θ = I.

Proof. The function $J = \theta - I : \vec E \to \mathbb{R}$ is a flow satisfying the cycle law, because both θ and I satisfy the cycle law and the cycle-law expression is linear. Define
$$g(x) = \sum_{i=1}^n J(e_i)\, r_{e_i},$$
where $e_1, e_2, \ldots, e_n$ is a path from a to x. Since the cycle law is satisfied, g is well defined (independent of the chosen path). Now we have to check that it is harmonic; for this we use Kirchhoff's law. Let $w_1, w_2, \ldots, w_k$ be the neighbours of $v \in V - \{a\}$. If P is a path from a to v, then $P \cup (v, w_i)$ is a path from a to $w_i$ for each $1 \le i \le k$. Hence we have
$$g(w_i) = \sum_{e\in P} J(e)\, r_e + J(v, w_i)\, r_{(v,w_i)},$$
and hence we get

$$\sum_{i=1}^k \frac{C_{v,w_i}}{\pi_v}\, g(w_i) = \sum_{i=1}^k \frac{C_{v,w_i}}{\pi_v}\Big[\sum_{e\in P} J(e) r_e + J(v, w_i) r_{(v,w_i)}\Big] = \sum_{e\in P} J(e) r_e \sum_{i=1}^k \frac{C_{v,w_i}}{\pi_v} + \sum_{i=1}^k \frac{C_{v,w_i}}{\pi_v}\, J(v, w_i)\, r_{(v,w_i)}$$
$$= \sum_{e\in P} J(e) r_e + \frac{1}{\pi_v}\sum_{i=1}^k J(v, w_i) \qquad \Big[\text{as } \textstyle\sum_{i} C_{v,w_i} = \pi_v \text{ and } C_{v,w_i}\, r_{(v,w_i)} = 1\Big]$$
$$= g(v) + 0 \qquad [\text{by Kirchhoff's law, which holds for } J],$$

and thus g is harmonic at every $v \in V - \{a\}$. Note that g(a) = 0, since g is defined along paths starting from a (take the empty path). Moreover, g is harmonic at a too: if $x_1, x_2, \ldots, x_m$ are the neighbours of a, then

$$g(x_i) = J(e_i)\, r_{e_i}, \qquad e_i = (a, x_i).$$
Then
$$\sum_{i=1}^m \frac{C_{e_i}}{\pi_a}\, g(x_i) = \sum_{i=1}^m \frac{C_{e_i}}{\pi_a}\, J(e_i)\, r_{e_i} = \frac{1}{\pi_a}\sum_{i=1}^m J(e_i) = \frac{1}{\pi_a}\Big[\sum_{i=1}^m \theta(e_i) - \sum_{i=1}^m I(e_i)\Big] = \frac{1}{\pi_a}\big[\|I\| - \|\theta\|\big] = 0,$$
since $\|\theta\| = \|I\|$. Hence g is harmonic on all of V(G), and since g(a) = 0 it must be identically 0 by the maximum principle (as in Lemma 1.4). This proves that J = 0. □

Lemma 1.8. Given a network, the ratio
$$\frac{V_a - V_z}{\|I\|}$$
is independent of the choice of the voltage function V, where I is the current flow induced by V.

Proof. We have already assumed that $V_z = 0$. The voltage values on the intermediate vertices are completely determined (using harmonicity) by the boundary values $V_a$ and $V_z$ (in this case, just by $V_a$). So if instead of $V_a$ we take $c\,V_a$, then every value on the intermediate vertices gets multiplied by c, and so does $\|I\|$; hence the ratio remains unchanged. □

Definition 1.9. The effective resistance between a and z is given by
$$R_{\mathrm{eff}}(a \leftrightarrow z) = \frac{V_a - V_z}{\|I\|}.$$
Here h(a) ≥ 0.

1.1. Random walk interpretation of effective resistance. Firstly, note that for any $x \in V$,

$$q_x = P_x(\text{hit } z \text{ before } a) = \frac{V_a - V_x}{V_a - V_z}.$$
The reasoning behind this is as follows:

When we consider starting from a, the probability of this event is clearly 0, and when we start from z, it is obviously 1. Now let us start at some x ∈ V − {a, z}. Then clearly, in the very first step of the walk starting from x, the walker will go to one of the neighbours of x, hence we can write

$$P_x(\text{hit } z \text{ before } a) = \sum_{y : y\sim x} P_x(X_1 = y,\ \text{hit } z \text{ before } a) = \sum_{y : y\sim x} P[X_1 = y \mid X_0 = x]\, P[\text{hit } z \text{ before } a \mid X_1 = y]$$

$$= \sum_{y : y\sim x} \frac{C_{xy}}{\pi_x}\, P_y[\text{hit } z \text{ before } a] = \sum_{y : y\sim x} \frac{C_{xy}}{\pi_x}\, q_y,$$

which means
$$q_x = \sum_{y : y\sim x} \frac{C_{xy}}{\pi_x}\, q_y$$
for all $x \in V - \{a, z\}$, and hence q is harmonic on $V - \{a, z\}$, with boundary values $q_a = 0$ and $q_z = 1$. But the voltage function V is also harmonic on $V - \{a, z\}$, with boundary values $V_a$ and $V_z$. By the uniqueness of harmonic functions on $V - \{a, z\}$ given the boundary values, q must be a translated and scaled version of V, and matching the boundary values gives
$$q_x = \frac{V_a - V_x}{V_a - V_z}.$$
Now we start from a and consider hitting z before returning to a. We have

$$P_a(\text{hit } z \text{ before returning to } a) = \sum_{x : x\sim a} \frac{C_{a,x}}{\pi_a}\, P_x(\text{hit } z \text{ before } a) = \sum_{x : x\sim a} \frac{C_{a,x}}{\pi_a}\cdot\frac{V_a - V_x}{V_a - V_z} = \frac{1}{\pi_a (V_a - V_z)}\sum_{x : x\sim a} C_{a,x}(V_a - V_x) = \frac{\|I\|}{\pi_a(V_a - V_z)} = \frac{1}{\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)}.$$

1.2. Network simplifications.

Parallel law: conductances add in parallel. By this we mean: if we have only two vertices a and z in the graph, and there are two parallel edges connecting a and z with conductances $c_1$ and $c_2$, we calculate $P_a(\text{hit } z \text{ before returning to } a)$. For this we need
$$R_{\mathrm{eff}}(a \leftrightarrow z) = \frac{V_a - V_z}{\|I\|}.$$
Now
$$\|I\| = -\sum_{x : x\sim a} I_{a,x} = -[(V_z - V_a)c_1 + (V_z - V_a)c_2] = (V_a - V_z)(c_1 + c_2),$$
and hence
$$R_{\mathrm{eff}}(a \leftrightarrow z) = \frac{1}{c_1 + c_2}.$$
Clearly the network behaves in the same way (i.e. has the same value of $P_a(\text{hit } z \text{ before returning to } a)$) as when a and z are connected by a single edge of conductance $c_1 + c_2$.

Series law: Resistances add in series. Similar reasoning.

Gluing: Glue vertices that have the same voltage.
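As a sanity check on these reduction rules, here is a small sketch (the example network and conductance values are assumptions, not from the lecture): it computes $R_{\mathrm{eff}}(a \leftrightarrow z)$ from the harmonic voltage with V(a) = 1, V(z) = 0 and compares the result with a by-hand series/parallel reduction.

```python
# Minimal sketch (example network assumed): compute R_eff(a <-> z) from the
# harmonic voltage with V(a) = 1, V(z) = 0, and check it against a by-hand
# series/parallel reduction.
import numpy as np

def effective_resistance(C, a, z):
    n = C.shape[0]
    pi = C.sum(axis=1)
    A, b = np.eye(n), np.zeros(n)
    b[a] = 1.0                               # V(a) = 1, V(z) = 0
    for x in range(n):
        if x not in (a, z):
            A[x, :] -= C[x, :] / pi[x]       # harmonicity at interior vertices
    V = np.linalg.solve(A, b)
    strength = np.dot(C[a, :], V[a] - V)     # ||I|| = sum_x C_ax (V_a - V_x)
    return (V[a] - V[z]) / strength

# a - u - z with conductances 2, 2 (series resistance 1/2 + 1/2 = 1), in
# parallel with a - w - z with conductances 1, 1 (series resistance 2):
# predicted R_eff = (1 * 2) / (1 + 2) = 2/3.
C = np.zeros((4, 4))
a, u, w, z = 0, 1, 2, 3
for x, y, c in [(a, u, 2.0), (u, z, 2.0), (a, w, 1.0), (w, z, 1.0)]:
    C[x, y] = C[y, x] = c
print(effective_resistance(C, a, z))         # approximately 0.6667
```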

2. June 5

Conventions: V(a) = 0, V(z) ≥ 0. Also $I(x \to y) = C_{xy}[V(y) - V(x)]$ and
$$\|I\| = \sum_{x : x\sim a} I(a \to x) \ge 0.$$

The effective resistance is
$$R_{\mathrm{eff}}(a \leftrightarrow z) = \frac{V(z) - V(a)}{\|I\|}.$$
(There could be a slight problem with signs.)

Corrections: We define

$$P_a(\text{hit } z \text{ before returning to } a) = P_a(\tau_z < \tau_a) = \sum_{x : x\sim a} \frac{C_{xa}}{\pi_a}\cdot\frac{V_x - V_a}{V_z - V_a} = \frac{1}{\pi_a}\cdot\frac{\|I\|}{V_z - V_a} = \frac{1}{\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)},$$
where $\tau_z$ is the hitting time of z, and similarly for a.
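A quick Monte Carlo sanity check of this escape-probability formula (a sketch; the example conductances, seed and number of trials are assumptions): simulate excursions of the weighted walk from a and compare the empirical frequency of hitting z before returning to a with $1/(\pi_a R_{\mathrm{eff}}(a \leftrightarrow z))$.

```python
# Sketch (example network assumed): estimate P_a(tau_z < tau_a) by simulation
# and compare with 1 / (pi_a * R_eff(a <-> z)).
import numpy as np
rng = np.random.default_rng(0)

C = np.array([[0., 2., 1., 0.],
              [2., 0., 1., 1.],
              [1., 1., 0., 2.],
              [0., 1., 2., 0.]])
a, z = 0, 3
n = C.shape[0]
pi = C.sum(axis=1)

# exact effective resistance via the voltage with V(a) = 1, V(z) = 0
A, b = np.eye(n), np.zeros(n)
b[a] = 1.0
for x in range(n):
    if x not in (a, z):
        A[x, :] -= C[x, :] / pi[x]
V = np.linalg.solve(A, b)
R_eff = 1.0 / np.dot(C[a, :], V[a] - V)

def escape_once():
    """One excursion from a; True if z is hit before returning to a."""
    x = a
    while True:
        x = rng.choice(n, p=C[x] / pi[x])   # one weighted random-walk step
        if x == z:
            return True
        if x == a:
            return False

trials = 20000
est = sum(escape_once() for _ in range(trials)) / trials
print(est, 1.0 / (pi[a] * R_eff))           # the two numbers should be close
```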

The commute time identity:

Theorem 2.1. Let $\tau_x = \min\{n \ge 0 : X_n = x\}$. Then
$$E_a(\tau_z) + E_z(\tau_a) = R_{\mathrm{eff}}(a \leftrightarrow z)\cdot\sum_x \pi_x.$$

If G is simple, and Cxy = 1 for all x, y, then

$$E_a(\tau_z) + E_z(\tau_a) = 2|E|\cdot R_{\mathrm{eff}}(a \leftrightarrow z).$$
This is because in that case $\pi_x = \deg(x)$ for every x, and $\sum_{x\in V}\deg(x) = 2|E|$.

Proof. Consider the Green's function

$$G_z(a, x) = E_a[\#\text{ visits to } x \text{ strictly before } \tau_z].$$
We first show that
$$E_a(\tau_z) = \sum_x G_z(a, x).$$
Indeed,

$$\sum_x G_z(a, x) = \sum_x E_a\Big[\sum_{n=0}^{\infty} \mathbf{1}_{n<\tau_z,\, X_n=x}\Big] = \sum_{n=0}^{\infty} E_a\Big[\sum_x \mathbf{1}_{n<\tau_z,\, X_n=x}\Big] = \sum_{n=0}^{\infty} E_a[\mathbf{1}_{n<\tau_z}] = E_a\Big[\sum_{n=0}^{\infty} \mathbf{1}_{n<\tau_z}\Big] = E_a[\tau_z],$$
as required.

Check that
$$h(x) = \frac{G_z(a, x)}{\pi(x)}$$
is harmonic on $G - \{a, z\}$. To see this, note that if x has neighbours $x_1, x_2, \ldots, x_m$, then
$$\sum_{i=1}^m \frac{C_{x,x_i}}{\pi_x}\, h(x_i) = \sum_{i=1}^m \frac{C_{x,x_i}}{\pi_x}\cdot\frac{G_z(a, x_i)}{\pi_{x_i}} = \sum_{i=1}^m \frac{C_{x,x_i}}{\pi_x \pi_{x_i}}\, E_a\Big[\sum_{n=0}^{\infty} \mathbf{1}_{n<\tau_z,\, X_n=x_i}\Big] = \frac{1}{\pi_x}\, E_a\Big[\sum_{n=0}^{\infty}\sum_{i=1}^m \frac{C_{x,x_i}}{\pi_{x_i}}\, \mathbf{1}_{n<\tau_z,\, X_n=x_i}\Big]$$
$$= \frac{1}{\pi_x}\, E_a\Big[\sum_{n=0}^{\infty} \mathbf{1}_{n+1<\tau_z,\, X_{n+1}=x}\Big] = \frac{1}{\pi_x}\, E_a[\#\text{ visits to } x \text{ strictly before } \tau_z] = \frac{G_z(a, x)}{\pi_x}.$$

Note that, when we are at $x_i$, the probability that in the next step we go to x is $C_{x,x_i}/\pi_{x_i}$, and since the only neighbours of x are the $x_i$, $1 \le i \le m$, before the particle reaches x it must have been at one of the $x_i$. Thus when $x \ne a, z$ we have
$$E_a[\mathbf{1}_{n+1<\tau_z,\, X_{n+1}=x}] = E_a\Big[\sum_{i=1}^m \mathbf{1}_{n+1<\tau_z,\, X_n=x_i,\, X_{n+1}=x}\Big] = \sum_{i=1}^m P[n+1 < \tau_z,\ X_n = x_i,\ X_{n+1} = x]$$
$$= \sum_{i=1}^m P[X_{n+1} = x \mid X_n = x_i]\, P[n < \tau_z,\ X_n = x_i] = \sum_{i=1}^m \frac{C_{x,x_i}}{\pi_{x_i}}\, E_a[\mathbf{1}_{n<\tau_z,\, X_n=x_i}] = E_a\Big[\sum_{i=1}^m \frac{C_{x,x_i}}{\pi_{x_i}}\, \mathbf{1}_{n<\tau_z,\, X_n=x_i}\Big].$$
Here h(z) = 0, and when calculating h(a) we just look at
$$P_a(\text{hit } z \text{ before returning to } a) = \frac{1}{\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)},$$
and $G_z(a, a)$ is the expectation of a geometric random variable (the excursions made by the walker either end at a or at z), so
$$h(a) = \frac{\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)}{\pi(a)}.$$
So the corresponding current has $\|I\| = 1$ (unit current flow).

Let me explain the meaning of this paragraph: when we start at a and count the number of times we return to a before hitting z, we are considering excursions which each end either at a or at z; ending at z is a success, and each excursion is like a Bernoulli random variable, all i.i.d., because every time we come back to a it is as if we start afresh. Thus we count the number of failures in independent Bernoulli trials before the first success, and this is a geometric random variable. The success probability of these Bernoulli random variables is
$$P_a[\tau_z < \tau_a] = \frac{1}{\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)},$$
so that the expectation of the geometric random variable is $\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)$.

Hence
$$h(a) = \frac{\pi_a\, R_{\mathrm{eff}}(a \leftrightarrow z)}{\pi_a} = R_{\mathrm{eff}}(a \leftrightarrow z).$$
Next,
$$E_z(\tau_a) = \sum_x \pi_x\, \tilde h(x), \qquad \text{where } \tilde h(x) = h(a) - h(x).$$
Reason: as we argued in the previous case, if we define

$$G_a(z, x) = E_z[\#\text{ visits to } x \text{ strictly before } \tau_a],$$
then clearly again
$$E_z(\tau_a) = \sum_x G_a(z, x) = \sum_x \pi_x\,\frac{G_a(z, x)}{\pi_x}.$$
Now we know that

$$G_a(z, a) = 0, \qquad \frac{G_a(z, z)}{\pi_z} = R_{\mathrm{eff}}(a \leftrightarrow z),$$
where the second identity follows from the same argument as before. Thus $\tilde h$ and $G_a(z, \cdot)/\pi$ have the same boundary values at a and z and both are harmonic on $V - \{a, z\}$, hence by uniqueness they are equal. Thus we have
$$E_z(\tau_a) = \sum_x \pi_x\,\tilde h(x) = \sum_x \pi_x\,[h(a) - h(x)].$$
Adding the two expressions we get
$$E_a(\tau_z) + E_z(\tau_a) = h(a)\sum_x \pi_x = R_{\mathrm{eff}}(a \leftrightarrow z)\sum_x \pi_x. \qquad\square$$

Theorem 2.2 (Thompson's principle).

$$R_{\mathrm{eff}}(a \leftrightarrow z) = \inf\{\mathcal{E}(\theta) : \theta \text{ a flow from } a \text{ to } z,\ \|\theta\| = 1\},$$
where
$$\mathcal{E}(\theta) = \sum_{e\in E} r_e\,[\theta(e)]^2$$
is the energy of the flow.

Proof. We are taking the infimum of a continuous function (the energy) over a compact set [the conditions that make θ a flow are continuous, and $\|\theta\|$ is continuous, hence $\{\theta : \|\theta\| = 1\}$ is closed and bounded], hence there exists a flow θ with $\|\theta\| = 1$ which minimizes $\mathcal{E}(\theta)$. It is enough to show that this minimizer satisfies the cycle law and that its energy equals the effective resistance. First we show that the cycle law holds. Let $e_1, e_2, \ldots, e_m$ be a cycle. Consider the flow

$$\gamma(e_i) = 1 \ (1 \le i \le m), \qquad \gamma(e) = 0 \text{ for all } e \notin \{e_1, \ldots, e_m\}.$$
Note that $\|\gamma\| = 0$, by the following two scenarios: if the cycle does not contain a, then $\|\gamma\| = 0$ by the definition of strength; if it does contain a, then unit flow leaves a through one edge of the cycle and comes back in through another, so the strength is again 0. Let δ ≠ 0 and consider the flow θ + δγ. It has strength 1, since strength is a linear function of the flow and $\|\theta\| = 1$, $\|\gamma\| = 0$. Because θ is the minimizer of the energy among all flows of unit strength,
$$\mathcal{E}(\theta + \delta\gamma) - \mathcal{E}(\theta) \ge 0,$$
and
$$\mathcal{E}(\theta + \delta\gamma) - \mathcal{E}(\theta) = \sum_e r_e[\theta(e) + \delta\gamma(e)]^2 - \sum_e r_e\theta(e)^2 = \delta^2 \sum_e r_e\gamma(e)^2 + 2\delta\sum_e r_e\theta(e)\gamma(e),$$
so that when δ > 0 we get
$$\delta\sum_e r_e\gamma(e)^2 + 2\sum_e r_e\theta(e)\gamma(e) \ge 0,$$
and letting δ → 0 we have $\sum_e r_e\theta(e)\gamma(e) \ge 0$. On the other hand, when δ < 0 we get
$$\delta\sum_e r_e\gamma(e)^2 + 2\sum_e r_e\theta(e)\gamma(e) \le 0$$
(because we are cancelling a factor of δ, which is negative, from both sides), and letting δ → 0 we now get $\sum_e r_e\theta(e)\gamma(e) \le 0$, which means
$$\sum_e r_e\theta(e)\gamma(e) = 0.$$
But γ is non-zero only on the cycle, hence
$$\sum_{i=1}^m r_{e_i}\theta(e_i)\gamma(e_i) = \sum_{i=1}^m r_{e_i}\theta(e_i) = 0.$$
Hence the cycle law holds for this minimizer.

Let us now show that the energy is indeed equal to the effective resistance. Any flow satisfying the cycle law is the current flow induced by some voltage function (by Lemma 1.7); let that voltage be V in this case.
$$\mathcal{E}(\theta) = \sum_e r_e\theta(e)^2 = \frac{1}{2}\sum_x\sum_y r_{x,y}\,[C_{x,y}(V_y - V_x)]^2 = \frac{1}{2}\sum_x\sum_y C_{x,y}\,[V_y - V_x]^2 = \frac{1}{2}\sum_x\sum_y [V_y - V_x]\,\theta(x, y),$$
where the last equality follows from the definition of the current θ induced by the voltage V. Now we split this sum over x and over y separately. We get
$$\sum_x\sum_y V_y\,\theta(x, y) = \sum_y V_y \sum_{x : x\sim y}\theta(x, y) = V_z - V_a,$$
since by Kirchhoff's law $\sum_{x : x\sim y}\theta(x, y) = 0$ for any y ≠ a, z, while for y = a we have $\sum_{x : x\sim a}\theta(x, a) = -\|\theta\| = -1$ and for y = z we have $\sum_{x : x\sim z}\theta(x, z) = \|\theta\| = 1$. The other sum is $-(V_z - V_a)$ by exactly the same argument, so subtracting the latter from the former and dividing by 2 we get $V_z - V_a = R_{\mathrm{eff}}(a \leftrightarrow z)$, since the flow has unit strength. □

Remark 2.3. The effective resistance between two points of a graph in which every edge has unit resistance is at most the length of the shortest path between them (note that resistances add in series).

Corollaries of Thompson's principle:

Lemma 2.4 (Rayleigh's monotonicity). If $\{r_e\}$ and $\{r_e'\}$ are two assignments of resistances on the same graph such that $r_e \le r_e'$ for each $e \in E$, then
$$R_{\mathrm{eff}}(a \leftrightarrow z; \{r_e\}) \le R_{\mathrm{eff}}(a \leftrightarrow z; \{r_e'\}).$$
Proof. By Thompson's principle we can take an optimal unit-strength flow θ' corresponding to $\{r_e'\}$ which minimizes the energy. Since $r_e \le r_e'$ for all e, we have
$$\sum_e r_e\,\theta'(e)^2 \le \sum_e r_e'\,\theta'(e)^2, \quad\text{i.e.}\quad \mathcal{E}(\theta'; \{r_e\}) \le \mathcal{E}(\theta'; \{r_e'\}).$$

Now let θ be the optimal flow corresponding to $\{r_e\}$; then clearly
$$\mathcal{E}(\theta; \{r_e\}) \le \mathcal{E}(\theta'; \{r_e\}),$$
which in turn shows that
$$\mathcal{E}(\theta; \{r_e\}) \le \mathcal{E}(\theta'; \{r_e'\}), \quad\text{i.e.}\quad R_{\mathrm{eff}}(a \leftrightarrow z; \{r_e\}) \le R_{\mathrm{eff}}(a \leftrightarrow z; \{r_e'\}). \qquad\square$$

Remark 2.5. From the above lemma it is clear that if we add an edge, then since that means reducing the resistance of a previously non-existent edge from ∞ to a finite positive number, the effective resistance is reduced by Rayleigh's monotonicity. This means that [and I did not understand the connection] removing an edge from a recurrent graph still leaves it recurrent.

Lemma 2.6 (Discrete Dirichlet principle).
$$\frac{1}{R_{\mathrm{eff}}(a \leftrightarrow z)} = \inf\{\mathcal{E}(h) : h : V \to \mathbb{R},\ h(a) = 1,\ h(z) = 0\},$$
where
$$\mathcal{E}(h) = \sum_{e=(x,y)} C_e\,[h(x) - h(y)]^2.$$
Proof. The unique minimizer h is the harmonic function with boundary values 1 at a and 0 at z. It is unique because any function harmonic on $G - \{a, z\}$ is completely determined by its values at a and z. Let

$$V(x) = h(x)\, R_{\mathrm{eff}}(a \leftrightarrow z).$$
If I is the current flow induced by V, then it has unit strength, because
$$\|I\| = -\sum_{x : a\sim x} I(a, x) = \sum_{x\sim a} C_{a,x}(V_a - V_x) = R_{\mathrm{eff}}(a \leftrightarrow z)\sum_{x\sim a} C_{a,x}(h(a) - h(x)) = R_{\mathrm{eff}}(a \leftrightarrow z)\,\|I_h\|,$$

where $I_h$ is the current induced by h. Now
$$R_{\mathrm{eff}}(a \leftrightarrow z) = \frac{h(a) - h(z)}{\|I_h\|} = \frac{1 - 0}{\|I_h\|} = \frac{1}{\|I_h\|} \;\Rightarrow\; R_{\mathrm{eff}}(a \leftrightarrow z)\,\|I_h\| = 1.$$
Then

$$R_{\mathrm{eff}}(a \leftrightarrow z) = \mathcal{E}(I) = \sum_e r_e\, I(e)^2 = \sum_{e=(x,y)} r_{x,y}\,\{C_{x,y}(V_y - V_x)\}^2 = \sum_{e=(x,y)} C_{x,y}(V_y - V_x)^2 = \sum_{e=(x,y)} C_{x,y}\, R_{\mathrm{eff}}(a \leftrightarrow z)^2\, (h(y) - h(x))^2 = R_{\mathrm{eff}}(a \leftrightarrow z)^2\, \mathcal{E}(h),$$
and hence we conclude
$$\mathcal{E}(h) = \frac{1}{R_{\mathrm{eff}}(a \leftrightarrow z)}. \qquad\square$$

2.1. Infinite network. Let G be an infinite graph and a a vertex. The degree of every vertex is assumed finite, which means that $\pi_x < \infty$ for all $x \in V(G)$. The goal is to define a resistance between a and ∞. Let $G_1 \subseteq G_2 \subseteq G_3 \subseteq \cdots$ be an exhaustion of G: $\cup_{n=1}^{\infty} G_n = G$, and each $G_n$ is finite and connected. For every n, replace all vertices of $G - G_n$ by a single vertex $z_n$. Then we define

$$R_{\mathrm{eff}}(a \leftrightarrow \infty) = \lim_{n\to\infty} R_{\mathrm{eff}}(a \leftrightarrow z_n;\ G_n \cup \{z_n\}).$$
Note that the effective resistance is monotonically increasing in n by Rayleigh's monotonicity (passing from $G_{n+1} \cup \{z_{n+1}\}$ to $G_n \cup \{z_n\}$ amounts to gluing vertices, which can only decrease the resistance), hence the limit exists. In fact the limit is also independent of the choice of the exhaustion $\{G_n\}$ of G. This can be seen as follows:

Let $\{G_n\}$ and $\{H_n\}$ be two such exhaustions. Then clearly, for every $n \in \mathbb{N}$, there exists some $m \in \mathbb{N}$ such that $G_n \subseteq H_m$, and so by the same monotonicity we have
$$R_{\mathrm{eff}}(a \leftrightarrow w_m;\ H_m \cup \{w_m\}) \ge R_{\mathrm{eff}}(a \leftrightarrow z_n;\ G_n \cup \{z_n\}),$$
and similarly we can find some k such that $H_m \subseteq G_k$ and
$$R_{\mathrm{eff}}(a \leftrightarrow z_k;\ G_k \cup \{z_k\}) \ge R_{\mathrm{eff}}(a \leftrightarrow w_m;\ H_m \cup \{w_m\}).$$
Hence by sandwiching the two limits coincide.
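As an illustration of the exhaustion definition, here is a sketch (the square-box exhaustion of $\mathbb{Z}^2$ with unit conductances is an assumed example, not from the lecture) that computes $R_{\mathrm{eff}}(0 \leftrightarrow z_n)$ for growing boxes, collapsing everything outside the box into a single vertex $z_n$; the values grow roughly logarithmically in n, consistent with the recurrence of $\mathbb{Z}^2$ (compare Theorem 2.7 below).

```python
# Sketch: R_eff(0 <-> z_n) on Z^2 with unit conductances, where G_n is the
# box [-n, n]^2 and z_n collapses everything outside the box.  The growth of
# the values with n is what makes R_eff(0 <-> infinity) infinite here.
import numpy as np

def box_resistance(n):
    pts = [(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)]
    idx = {p: i for i, p in enumerate(pts)}
    zn = len(pts)                           # the collapsed "outside" vertex
    N = zn + 1
    L = np.zeros((N, N))                    # graph Laplacian, unit conductances
    def add_edge(i, j):
        L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1
    for (x, y) in pts:
        i = idx[(x, y)]
        for q in [(x + 1, y), (x, y + 1)]:  # each edge inside the box once
            if q in idx:
                add_edge(i, idx[q])
        for q in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if q not in idx:                # edge leaving the box -> z_n
                add_edge(i, zn)
    # unit current in at 0 and out at z_n; ground z_n and solve L V = b
    src = idx[(0, 0)]
    keep = [k for k in range(N) if k != zn]
    b = np.zeros(N); b[src] = 1.0
    V = np.zeros(N)
    V[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    return V[src] - V[zn]

for n in [2, 4, 8, 16]:
    print(n, round(box_resistance(n), 4))   # slowly (logarithmically) growing
```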

Theorem 2.7. The following are equivalent:
(i) The weighted random walk on the graph is transient.
(ii) There exists some vertex a such that $R_{\mathrm{eff}}(a \leftrightarrow \infty) < \infty$ (and if one vertex satisfies this condition, then all the others do too).
(iii) There exists a flow θ from a to ∞ with finite energy, i.e. $\|\theta\| > 0$ and $\mathcal{E}(\theta) < \infty$.
The idea here is that since for each finite approximation we have
$$R_{\mathrm{eff}}(a \leftrightarrow z_n) = \frac{1}{\pi_a\, P_a(\tau_{z_n} < \tau_a)},$$
we get
$$R_{\mathrm{eff}}(a \leftrightarrow \infty) < \infty \;\Rightarrow\; \lim_{n\to\infty} P_a(\tau_{z_n} < \tau_a) > 0,$$
so these escape probabilities stay bounded away from 0, and hence with positive probability the walk started from a goes off to infinity without returning.

2.2. Nash-Williams inequality.

Theorem 2.8. Let $\{\pi_n\}$ be disjoint edge cutsets separating a from z, i.e. each $\pi_n$ is a set of edges such that every path from a to z must use an edge of $\pi_n$. Then
$$R_{\mathrm{eff}}(a \leftrightarrow z) \ge \sum_n \Big\{\sum_{e\in\pi_n} C_e\Big\}^{-1}.$$

Proof. Note that if there exist cutsets $\{\pi_n\}$ (now separating a from infinity) such that
$$\sum_n \Big\{\sum_{e\in\pi_n} C_e\Big\}^{-1} = \infty,$$
then the graph is recurrent, by the previous theorem. The following is only a sketch of the proof:

Split the resistances equally in parallel network, and if nested use monotonicity and wherever possible, glue them together.

Let θ be a unit flow from a to z. Then by the Cauchy–Schwarz inequality,
$$\Big[\sum_{e\in\pi_n} C_e\Big]\Big[\sum_{e\in\pi_n} r_e\,\theta(e)^2\Big] \ge \Big(\sum_{e\in\pi_n}\sqrt{C_e}\,\sqrt{r_e}\,|\theta(e)|\Big)^2 = \Big(\sum_{e\in\pi_n}|\theta(e)|\Big)^2 \ge \Big(\sum_{e\in\pi_n}\theta(e)\Big)^2 = \Big(\sum_{x : x\sim a}\theta(a, x)\Big)^2 = \|\theta\|^2 = 1,$$
and so we have
$$\sum_e r_e\,\theta(e)^2 \ge \sum_n\sum_{e\in\pi_n} r_e\,\theta(e)^2 \ge \sum_n \frac{1}{\sum_{e\in\pi_n} C_e}.$$
Now for the optimal θ we know that
$$\mathcal{E}(\theta) = \sum_e r_e\,\theta(e)^2 = R_{\mathrm{eff}}(a \leftrightarrow z),$$
and hence in that case
$$R_{\mathrm{eff}}(a \leftrightarrow z) \ge \sum_n\Big(\sum_{e\in\pi_n} C_e\Big)^{-1}. \qquad\square$$
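To see the Nash–Williams bound in action, here is a sketch (the choice of cutsets in $\mathbb{Z}^2$ with unit conductances is an illustrative assumption, not from the lecture): take $\pi_n$ to be the set of edges joining the box $[-n, n]^2$ to its complement; these cutsets are disjoint and each has $4(2n+1)$ edges, so the bound gives $R_{\mathrm{eff}}(0 \leftrightarrow \infty) \ge \sum_n 1/(8n+4) = \infty$, which is again the recurrence of $\mathbb{Z}^2$.

```python
# Sketch: Nash-Williams lower bound for Z^2 with the box cutsets pi_n = edges
# between [-n, n]^2 and its complement (|pi_n| = 4 * (2n + 1), unit
# conductances).  The partial sums grow like (1/8) * log N, hence diverge.
import math

def nash_williams_bound(N):
    return sum(1.0 / (4 * (2 * n + 1)) for n in range(N))

for N in [10, 100, 1000, 10000]:
    print(N, round(nash_williams_bound(N), 4), round(math.log(N) / 8, 4))
```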

3. June 6

Random walk on high-dimensional percolation clusters:

Many of the arguments will resemble those for branching processes. We will consider $\mathbb{Z}^d$ with d ≥ 19. Let Ψ(p) be the probability that there exists an infinite connected component in $G_p$, where G is an infinite graph and $G_p$ is obtained by keeping each edge with probability p and discarding it with probability 1 − p, independently over edges.

Clearly Ψ(p) is an increasing function of p with Ψ(0) = 0 and Ψ(1) = 1. Also, Ψ(p) ∈ {0, 1} for all 0 ≤ p ≤ 1 by Kolmogorov's 0–1 law. Here's my idea as to how this works out: the existence of an infinite cluster is a tail event, since it is not affected by changing the states of finitely many edges. We define

pc = inf{p ∈ [0, 1] : Ψ(p) = 1}.

Thus when we consider the graph in the critical regime, we keep any edge open with probability $p_c$ and closed with probability $1 - p_c$. We denote by {x ↔ y} the event that there exists an open path between x and y. In high dimension, critical percolation behaves like a critical branching process. (He also mentioned percolation on a regular tree.) We expect the critical exponents to be the same, and at criticality there is polynomial decay.

On a (binary) tree, the critical percolation probability is 1/2. Have to find out why. This means that

$$P_p[\text{there exists an infinite cluster}] = 0 \text{ if } p < 1/2, \qquad P_p[\text{there exists an infinite cluster}] = 1 \text{ if } p > 1/2.$$
One way to see this is that the progeny distribution of the cluster of the root is really Bin(2, p), and we know the basic theory of branching processes: if µ is the expected progeny size, then µ > 1 implies that the probability of surviving forever is positive, while µ < 1 means there is no infinite cluster. From that (together with the 0–1 law above) we can conclude. This completes the argument.
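A quick simulation sketch of this phase transition (the parameter values, generation cutoff and trial counts are assumptions): the cluster of the root of the binary tree under percolation with parameter p is a Galton–Watson tree with Bin(2, p) offspring, and the probability of surviving many generations jumps from near 0 to clearly positive as p crosses 1/2.

```python
# Sketch: survival of a Galton-Watson process with Bin(2, p) offspring
# (= percolation cluster of the root of the binary tree), estimated by
# simulating up to `gens` generations.  Parameter values are assumptions.
import random

def survives(p, gens=100, cap=10**4):
    z = 1
    for _ in range(gens):
        # each of the z individuals has Bin(2, p) children
        z = sum(1 for _ in range(z) for c in range(2) if random.random() < p)
        if z == 0:
            return False
        if z > cap:            # clearly supercritical growth; stop early
            return True
    return True

random.seed(0)
for p in [0.4, 0.5, 0.55, 0.6]:
    est = sum(survives(p) for _ in range(1000)) / 1000
    print(p, est)   # near 0 for p <= 1/2, clearly positive for p > 1/2
```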

We shall consider the critical Galton–Watson tree with $p_c = 1/2$, for which $\Psi(p_c) = 0$. Homework: show that $p_c > 1/(2d)$ when looking at $\mathbb{Z}^d$. What remains the same in any vertex-transitive graph is that $\Psi(p_c) = 0$ (couldn't understand this).

Let Ppc be the Bernoulli product measure on the infinite graph, and Zr be the number of progenies in generation r. Then clearly

$$P_{p_c}(Z_r > 0) \to 0 \text{ as } r \to \infty \;\Rightarrow\; \text{no infinite cluster}.$$
We shall find that
$$P_{p_c}(Z_r > 0) \sim \frac{c}{r} \quad \text{(polynomial decay with exponent 1)}.$$
However, if T is the total number of progeny of the original particle, then
$$P_{p_c}(T \ge n) \sim \frac{c}{\sqrt{n}}.$$

We can use a random walk to prove this: the exploration of the tree gives a mean-zero random walk $X_n$ (with ±1-type increments), and we consider $P(X_n \text{ stays above } 0 \text{ in the first } n \text{ steps})$. One may have to consider a skewed version of the random walk. For higher dimensions we need lace expansion results.
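Here is a sketch of that random-walk representation (the offspring law Bin(2, 1/2), the censoring cap and the sample sizes are assumptions): the total progeny T of a Galton–Watson tree equals the first time the walk $S_k = \sum_{i\le k}(\xi_i - 1)$ hits −1, where the $\xi_i$ are i.i.d. offspring counts; at criticality the simulated values of $\sqrt{n}\,P(T \ge n)$ roughly stabilize, matching the $c/\sqrt{n}$ decay.

```python
# Sketch: total progeny T of a critical Galton-Watson tree with Bin(2, 1/2)
# offspring, via the random-walk representation T = min{k : S_k = -1},
# S_k = sum_{i<=k} (xi_i - 1).  We look at sqrt(n) * P(T >= n).
import random

def total_progeny(cap):
    s, k = 0, 0
    while s > -1:
        if k >= cap:
            return cap          # censor very large trees
        s += sum(1 for _ in range(2) if random.random() < 0.5) - 1
        k += 1
    return k

random.seed(1)
trials = 20000
samples = [total_progeny(cap=10**4) for _ in range(trials)]
for n in [10, 100, 1000]:
    p = sum(t >= n for t in samples) / trials
    print(n, p, round(p * n ** 0.5, 3))   # last column roughly constant
```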

Theorem 3.1 (Triangle condition; Hara and Slade '90). Let $P = P_{p_c}$, d ≥ 19. Then
$$\sum_{x,y\in\mathbb{Z}^d} P(0 \leftrightarrow x)\,P(x \leftrightarrow y)\,P(y \leftrightarrow 0) < \infty.$$

This theorem implies $\Psi(p_c) = 0$: taking y = 0 we have the even smaller sum
$$\sum_{x\in\mathbb{Z}^d} P(0 \leftrightarrow x)\,P(x \leftrightarrow 0)\,P(0 \leftrightarrow 0) < \infty \;\Rightarrow\; \sum_{x\in\mathbb{Z}^d} P(0 \leftrightarrow x)^2 < \infty,$$
and here note that we indeed have
$$\sum_{x\in\mathbb{Z}^d} P(0 \leftrightarrow x) = \infty,$$
since we know from a standard theorem that the expected cluster size is infinite at $p = p_c$. But from the above theorem we can now conclude that $P(0 \leftrightarrow x) \to 0$ as $x \to \infty$.

This shows that there can be no infinite cluster, since the further x is from 0, the less the chance that we can find a path between 0 and x.

Note that P(there exist two disjoint infinite clusters) = 0 (uniqueness of the infinite cluster): it would be far too costly to keep two such infinite clusters open and disjoint.

Theorem 3.2 (Theorem B; Aizenman, Barsky '91). Assume the triangle condition holds and d > 6. Then
$$P_{p_c}(|C(0)| \ge n) \sim \frac{c}{\sqrt{n}},$$
where $C(0) = \{x \in \mathbb{Z}^d : 0 \leftrightarrow x\}$ is the cluster of the origin.

Theorem 3.3 (Theorem C; Hara, v.d. Hofstad, Slade '03, Hara '08).
$$P_{p_c}(x \leftrightarrow y) = (1 + o(1))\, C\, \|x - y\|_2^{2-d}.$$

3.1. Random walk on the infinite incipient cluster.

Theorem 3.4 (Theorem D; v.d. Hofstad, Járai '04). Condition on the event {0 ↔ x}, which indeed has probability > 0, and take |x| → ∞. Then the conditional measures converge. That is, take any cylinder event F (one which depends on only finitely many edges of the graph), say

F = { edges e1, e2, ..., e5 are open }. Then

$$P_{p_c}(F \mid \{0 \leftrightarrow x\})$$
actually has a limit as |x| → ∞, and this limit is independent of the way in which |x| is taken to infinity. We refer to this limit as $P_{\mathrm{IIC}}(F)$, which is clearly a valid measure; it is supported on infinite connected graphs containing the origin.

Theorem 3.5 (Theorem E; Kozma, N.). Let $S_r = \{x \in \mathbb{Z}^d : \|x\|_2 \ge r\}$. Then
$$P_{p_c}(0 \leftrightarrow S_r) \sim r^{-2}$$
when d > 6, which means that the critical exponent is −2. The lower bound can be shown using the second moment method.

Intrinsic metric critical exponents:

$$\{0 \leftrightarrow^{r} x\} = \{0 \text{ is connected to } x \text{ by an open path of length} \le r\},$$
$$\{0 \leftrightarrow^{=r} x\} = \{0 \text{ is connected to } x \text{ by an open path, and the shortest such path has length exactly } r\}.$$
Then we define the ball of radius r in this (intrinsic) metric as $B(0, r) = \{x : 0 \leftrightarrow^{r} x\}$ and its boundary $\delta B(0, r) = \{x : 0 \leftrightarrow^{=r} x\}$.
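These intrinsic balls are exactly what a breadth-first search computes. A minimal sketch follows (the finite $\mathbb{Z}^2$ box, the value of p and the radius are illustrative assumptions; this is not a simulation of high-dimensional critical percolation): given one configuration of open edges, BFS from 0 returns B(0, r) and δB(0, r).

```python
# Sketch: BFS in a percolation configuration on a finite Z^2 box, returning
# the intrinsic ball B(0, r) and its boundary dB(0, r) (vertices at intrinsic
# distance exactly r).  Box size, p and r are illustrative assumptions.
import random
from collections import deque

random.seed(2)
L, p, r = 50, 0.5, 10
states = {}
def open_edge(u, v):                  # i.i.d. edge states, memoised
    key = (min(u, v), max(u, v))
    if key not in states:
        states[key] = random.random() < p
    return states[key]

dist = {(0, 0): 0}
queue = deque([(0, 0)])
while queue:
    v = queue.popleft()
    if dist[v] == r:
        continue                       # do not explore past radius r
    x, y = v
    for w in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if max(abs(w[0]), abs(w[1])) > L or w in dist:
            continue
        if open_edge(v, w):
            dist[w] = dist[v] + 1
            queue.append(w)

ball = [v for v in dist if dist[v] <= r]        # B(0, r)
boundary = [v for v in dist if dist[v] == r]    # dB(0, r)
print(len(ball), len(boundary))
```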

Theorem 3.6 (Theorem 1).
(a) $E_{p_c}(|B(0, r)|) \asymp r$ (note that this is not conditioned on survival, and this behaviour is quite similar to that on a tree);
(b) $P_{p_c}(\delta B(0, r) \ne \emptyset) \sim \dfrac{1}{r}$.
We shall use this theorem to prove the following one.

Theorem 3.7 (Theorem 2). For all ε > 0, there exists C > 0 such that for all r > 0,

$$P_{\mathrm{IIC}}\big(C^{-1}r^2 \le |B(0, r)| \le Cr^2,\ C^{-1}r \le R_{\mathrm{eff}}(0, \delta B(0, r)) \le r\big) \ge 1 - \varepsilon.$$
An even stronger statement is that
$$P_{\mathrm{IIC}}\Big[\exists C:\ C^{-1}r^2(\log r)^{-C} \le |B(0, r)| \le Cr^2(\log r)^C,\ \frac{C^{-1}r}{(\log r)^C} \le R_{\mathrm{eff}} \le r \ \text{for all } r\Big] = 1,$$
and this we derive using the Borel–Cantelli lemma.

Theorem 3.8 (Theorem 3). For all ε > 0, there exists C > 0 such that for all r > 0,

$$P_{\mathrm{IIC}}\Big(C^{-1}r^3 \le E_{\mathrm{SRW}}(\tau_r) \le Cr^3,\ \ \frac{C^{-1}}{r^2} \le P^{r^3}_{\mathrm{SRW}}(0, 0) \le \frac{C}{r^2}\Big) \ge 1 - \varepsilon.$$
Here

$$\tau_r = \min\{t \ge 0 : X_t \in \delta B(0, r)\}.$$
This we can prove using Theorem 2 and the commute time identity.

4. June 9

Theorem 4.1 (BK inequality; van den Berg, Kesten '85). On any graph G and for any p ∈ [0, 1] we have
$$P_p(\{x \leftrightarrow^{r} y\} \circ \{w \leftrightarrow^{l} z\}) \le P_p(\{x \leftrightarrow^{r} y\})\, P_p(\{w \leftrightarrow^{l} z\}).$$
Here the event $\{x \leftrightarrow^{r} y\} \circ \{w \leftrightarrow^{l} z\}$ means that there exist two edge-disjoint open paths, of length at most r and l respectively, connecting x to y and w to z. We shall prove the following lemma:

Lemma 4.2 (Lemma 1).
$$E_{p_c}[|B(0, r)| \cdot \mathbf{1}_{0\leftrightarrow x}] \le C\, r^2\, |x|^{2-d}, \qquad \text{where } |x| \ge 2r.$$
We already know from Theorem C above that

$$P_{p_c}(x \leftrightarrow y) \sim |x - y|^{2-d} \;\Rightarrow\; P_{p_c}(0 \leftrightarrow x) \sim |x|^{2-d}.$$
Therefore we can immediately sense that the first step would be to condition on the event {0 ↔ x}, so that we are left to prove that
$$E_{p_c}[|B(0, r)| \mid 0 \leftrightarrow x] = E_{\mathrm{IIC}}(|B(0, r)|) \le Cr^2.$$

Proof. X r Epc [|B(0, r)|.10↔x] =Epc [ 10↔ y10↔x] d y∈Z X r = Ppc [0 ↔ y, 0 ↔ x] d y∈Z Now note that (can’t draw the figure) the two paths between 0 and x and 0 and y may intersect each other, and let z be the last time (starting from 0) that they intersect. Then there exist two disjoint paths between z and y and z and x, and both these paths are also disjoint from another path which surely exists between 0 and z (think of a river 0 → z → y, with one distributary z → x). Because we know that there exists a path which is of length at most r between 0 and y, hence we can say that 0 ↔r z and z ↔r y. Thus we can split up the above event as

X r X r r Ppc [0 ↔ y, 0 ↔ x] ≤ Ppc [{0 ↔ z} ◦ {z ↔ y} ◦ {x ↔ z}] d y∈Z y,z X r r ≤ Ppc [{0 ↔ z}]Ppc [{z ↔ y}]Ppc [{x ↔ z}] by B-K inequality; y,z X r r = Ppc [{0 ↔ z}]Ppc [{0 ↔ y − z}]Ppc [{x − z ↔ 0}] by translation invariance; y,z X r r = Ppc [{0 ↔ z}]Ppc [{0 ↔ w}]Ppc [{x − z ↔ 0}] where w = y − z; w,z Now note that since we are asking for 0 ↔r z, hence z couldn’t possibly be too large, and |x| ≥ 2r, consequently we can find a constant C large enough such that 2−d 2−d Ppc [{x − z ↔ 0}] ∼ |x − z| ≤ C.|x| . Hence we finally have

X r r 2−d X r r Ppc [{0 ↔ z}]Ppc [{0 ↔ w}]Ppc [{x − z ↔ 0}] ≤C|x| Ppc [{0 ↔ z}]Ppc [{0 ↔ w}] w,z w,z 2−d X r X r =C|x| Ppc [{0 ↔ w}] Ppc [{0 ↔ z}] w z 2−d X 2 r =C|x| [Epc ( 10↔ w)] w 2−d 2 =C|x| [Epc (|B(0, r)|)] 2−d 2 ≤C1|x| r by theorem 1 (a)  Lemma 4.3 (Lemma 2). 2 2−d Ppc (|B(0, r)| ≤ εr , 0 ↔ x) ≤ C.ε|X| , for |x| ≥ 2r. Proof. Remark made by Professor: what we are going to do is perform a breadth-first search.

Note that if |B(0, r)| ≤ εr2, then since r X |δB(0, j)| = |B(0, r)| j=0 we must have some j ∈ [r/2, r] such that |δB(0, j)| ≤ 2εr 19 since otherwise r X |δB(0, j)| > 2εr ⇒ |δB(0, j)| > 2εr × r/2 = εr2 ⇒ |B(0, r)| > εr2. j=r/2 If in addition we have 0 ↔ x, then there must exist v ∈ δB(0, j) such that v ↔ x outside B(0, j), since the path connecting 0 and x must cut δB(0, j) at some point v (since x is more than 2r distance away from 0) and there will be at least one path which will then continue outward from v till it reaches x, without coming back inside the ball B(0, j). Let j∗ be the minimal such j. Then we condition on B(0, j∗) not being empty.

∗ X ∗ ∗ Ppc (0 ↔ x|B(0, j )) = Ppc [0 ↔ v, v ↔ x off B(0, j )|B(0, j )] v∈δB(0,j∗) X ∗ ∗ ≤ Ppc [v ↔ x off B(0, j )|B(0, j )] not sure of this inequality step v∈δB(0,j∗) X ∗ = Ppc [v ↔ x off B(0, j )] v∈δB(0,j∗) X ≤ Ppc [v ↔ x] v∈δB(0,j∗) X ∼ |x − v|2−d v∈δB(0,j∗) X ≤C|x|2−d 1 v∈δB(0,j∗) =C|x|2−d|δB(0, j∗)| 2−d ≤C1εr|x| This implies that r P [|B(0, r)| ≤ εr2, 0 ↔ x] ≤P [{ there exists j∗ ∈ [ , r]: |δB(0, j∗)| ≤ 2εr}; 0 ↔ x] pc pc 2 X = P [B(0, j∗) = A]P [0 ↔ x|B(0, j∗) = A] A Now 2 2−d Ppc [|B(0, r)| ≤ εr , 0 ↔ x] ≤ Ppc [δB(0, r/2) 6= φ]Cεr|x| by theorem 1 (b). One explanation could be as follows:

2 X ∗ 2 ∗ Ppc [|B(0, r)| ≤ εr , 0 ↔ x] = P [B(0, j ) = A]P [|B(0, r)| ≤ εr , 0 ↔ x|B(0, j ) = A] A X X ≤ P [B(0, j∗) = A] P [v ↔ x off B(0, j∗)|B(0, j∗) = A] A:δA6=φ v∈δB(0,j∗) X X = P [B(0, j∗) = A] P [v ↔ x] A:δA6=φ v∈δB(0,j∗) X X ∼ P [B(0, j∗) = A] |x − v|2−d A:δA6=φ v∈δB(0,j∗) X X ≤ P [B(0, j∗) = A] C.|x|2−d as |x| ≥ 2r, |v| ≤ r/2; A:δA6=φ v∈δB(0,j∗) X =C.|x|2−d P [B(0, j∗) = A]|δB(0, j∗)| A:δA6=φ 20 X ≤C.|x|2−d P [B(0, j∗) = A]2εr A:δA6=φ =C.ε.r.|x|2−dP [δB(0, j∗) 6= φ] ≤C.ε.r.|x|2−dP [δB(0, r/2) 6= φ] as j∗ ≥ r/2; ≤C.ε.r.|x|2−dΓ(r/2) C ≤C.ε.r.|x|2−d. 1 r/2 =C.ε.|x|2−d Hence proved.  We wish to prove that −1 c PIIC (Reff (0, δB(0, r)) ≤ A r) ≤ √ . A Lemma 4.4 (Lemma 3).

−1 C 2−d Pp [Reff (0, δB(0, r)) ≤ A r, 0 ↔ x] ≤ √ |x| . c A Proof. We shall use Nash William’s inequality. We know that X 1 X 1 Reff (a ↔ z) ≥ P = Ce |πn| n e∈πn n since here each edge has conductance 1, and πn are the disjoint cut sets.

Let j ∈ [r/4, r/2] such that 0 < |δB(0, j)| < 4εr, since |B(0, r)| ≤ εr2. Let

Lj = no. of edges between δB(0, j−1) and δB(0, j) that reach δB(0, r) in an open path not returning to B(0, j). Our idea will be to condition on B(0, j) and take some edge between δB(0, j − 1) and δB(0, j). Let v be the end point at δB(0, j). Then we bound

Ppc [δB(v, r/2) 6= φ|B(0, j)]. This event is monotone with respect to adding edges. By Theorem 1 (b), C Γ(r) = sup P G(δB(0, r) 6= φ) ≤ . d r G⊆E(Z ) This can bound C P [δB(v, r/2) 6= φ|B(0, j)] ≤ . r I am not sure how this helps. Let’s see:

E[Lj|B(0, j) = A] =E[# edges between δB(0, j − 1) and δB(0, j), reaching δB(0, r) off B(0, j)|B(0, j) = A] X =E[ 1 edges between δB(0,j−1) and v, reaching δB(0,r) off B(0,j)|B(0, j) = A] v∈δA X ≤E[ 1v↔B(0,r) off B(0,j)|B(0, j) = A] v∈δA X = P [v ↔ B(0, r) off B(0, j)|B(0, j) = A] v∈δA X = P [v ↔ B(0, r) off B(0, j)] v∈δA 21 X ≤ P [v ↔ B(0, r)] v∈δA X ≤ P [δB(v, r/2) 6= φ], |v| ≤ r/2 ⇒ textdistancebetweenv and B(0, r) ≥ r/2; v∈δA X ≤ Γ(r/2) v∈δA C =|δA|. r/2 C =|δB(0, j)|. r and thus taking expectation again we get C E[L ] ≤ E[|δB(0, j)|]. j r Then X C X C E[ L ] ≤ E[ |δB(0, j)|] = E[|B(0, r/2)|] ≤ C j r r j∈[0,r/2] j∈[0,r/2] using Theorem 1 (a).

5. June 10 Our first goal is to prove Theorem 1 (a), i.e.

Epc (|B(0, r)|) ≤ Cr. Let us define

$$G(r) = E_{p_c}(|B(0, r)|).$$
Then we claim that for all r there exists a constant c > 0 independent of r such that
$$G(2r) \ge \frac{c\,G^2(r)}{r}.$$
The intuition behind this inequality is as follows: let 0 be connected to a point y by a path of length at most 2r. Then there is at least one x such that $0 \leftrightarrow^{r} x$ and $x \leftrightarrow^{r} y$. Now what we have to show is that when we sum over x and y, the sum has to be at least $G^2(r)/r$. Note that by the BK inequality it is immediate that the upper bound for the sum is $G^2(r)$. Also, note that for a typical y which is connected to 0 by a path of length strictly less than 2r, there could be several such points x in between them. We do not wish to consider x's that are too far away.

Definition 5.1. (x, y) is said to be a k-overcounted pair if there exist u, v ∈ Zd such that |u−x| ≥ k, |v−x| ≥ k and {0 ↔r u} ◦ {v ↔r y} ◦ {u ↔ x} ◦ {v ↔ x} ◦ {u ↔ v}. Note that we need in this event a separate disjoint path connecting u and v. Let N(k) = {(x, y): {0 ↔r x} ◦ {x ↔ y}, (x, y) not k − overcounted}. Lemma 5.2. |B(0, 2r)|rCkd ≥ |N(k)|. 22

Proof. What we do is, we consider the pathP of length ≤ 2r between 0 and y, and we draw a tube of radius k around it (meaning we thicken the path into a strip with two parallel curves on either side which are each k distance away from the original curve). Now we claim that if x (which is connected to 0 and y) is to lie outside this tube, then (x, y) would be k-overcounted. This is because (I could not draw a diagram), if an x which is connected to both 0 and y by paths of legths ≤ r is present outside the tube, then let the path P1 between 0 and x intersect the path P between 0 and y at u, and the path P2 between x and y intersect the path between 0 and y at v. Then clearly since u and v lie on the path P connecting 0 and y and x lies outside the tube of thickness k encasing that same path P , clearly u and v must each be at distance ≥ k away from x, i.e. |u − x| ≥ k, |v − x|. Also, the events {u ↔ x} ◦ {v ↔ x} automatically hold, and disjoint r of the path P , because these hold because of P1 and P2. Also since 0 ↔ x and u is an intermediate point on that path, hence we must have 0 ↔r u. Similarly v ↔r y. Thus indeed it is k-overcounted.

Hence we need only consider x inside the tube. Now the thickness of the tube (i.e. the radius) being k, and the dimension of the space being d, and the length of the path P being ≤ 2r, hence the volume of the tube is ≤ 2r.C.kd, but we have so far considered only one y which is at distance ≤ 2r from origin. Hence if we consider the whole ball B(0, 2r), clearly the total number of point |N(k)| ≤ |B(0, 2r)|.r.C.kd.

 Continuing with our goal of proving Theorem 1 (a), we have X P [(x, y) is k − overcounted] = P [{0 ↔r u} ◦ {v ↔r y} ◦ {u ↔ x} ◦ {v ↔ x} ◦ {u ↔ v}] u,v:|u−x|≥k,|v−x|≥k X ≤ P [{0 ↔r u}]P [{v ↔r y}]P [{u ↔ x}]P [{v ↔ x}]P [{u ↔ v}] u,v:|u−x|≥k,|v−x|≥k where the last line follows from B-K inequality. Now note that every event here is translation invariant, which means that for all z, p, w ∈ Zd, we have P [z ↔ w] = P [z + p ↔ w + p]. Using this fact and writing u0 = u − x and v0 = v − x, we get X P [(x, y) is k − overcounted] ≤ P [{0 ↔r u}]P [{v ↔r y}]P [{u ↔ x}]P [{v ↔ x}]P [{u ↔ v}] u,v:|u−x|≥k,|v−x|≥k X = P [−x ↔r u0]P [v0 ↔r y − x]P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] |u0|≥k,|v0|≥k so that we have X X X P [(x, y) is k − overcounted] ≤ P [−x ↔r u0]P [v0 ↔r y − x]P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] x,y x,y |u0|≥k,|v0|≥k X X = P [x ↔r u0]P [v0 ↔r z]P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] where y − x = z; x,z |u0|≥k,|v0|≥k X X X = P [x ↔r u0] P [v0 ↔r z]P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] |u0|≥k,|v0|≥k x z X = E(|B(u0, r)|)E(|B(v0, r)|)P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] |u0|≥k,|v0|≥k X =[G(r)]2 P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] |u0|≥k,|v0|≥k 23 and now we can make use of the triangle condition of Hara and Slade. Since we have X P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] < ∞ 0 0 d u ,v ∈Z hence we can find, given any ε > 0, a suitably large k > 0, such that X P [u0 ↔ 0]P [v0 ↔ 0]P [u0 ↔ v0] < ε. |u0|≥k,|v0|≥k And thus we get that X P [(x, y) is k − overcounted] ≤ ε[G(r)]2. x,y Lemma 5.3. X P [{0 ↔r x} ◦ {y ↔r x}] ≥ C[G(r)]2. x,y Proof. By translation invariance, X X P [{0 ↔r x} ◦ {y ↔r x}] = P [{0 ↔r x} ◦ {y − x ↔r 0}] x,y x,y X = P [{0 ↔r x} ◦ {z ↔r 0}] where z = y − x; x,z Now our idea is that, since we have kind of brought 0 ”in between” x and y, we now wish to separate the path between x and y through 0 into two parts x ↔ u and v ↔ y, such that |u − v| = k. We actually choose k k u = ( 2 , 0, ..., 0) and v = (− 2 , 0, ..., 0). We wish to show that X 1 P [{u ↔r−k x} ◦ {v ↔r−k y}] ≥ [G(r − k)]2. 2 x,y Note that k,however large, is fixed here. Also, we can show that G(r) ≤ G(r − k) ≤ G(r). (2k)d The second inequality is obvious since B(0, r − k) ⊆ B(0, r) ⇒ |B(0, r − k)| ≤ |B(0, r)| ⇒ G(r − k) ≤ G(r). To see the first, note that if {u ↔r−k x} ◦ {v ↔r−k y} then we need only change the status (openness or closedness) of at most (2k)d edges in the ball B(0, k) [since u, v are distance k/2 away from 0 and hence within this ball] so that there exists paths of length at most k between 0 and u and between 0 and v, and for each such change, we either change things by a factor of pc or 1 − pc, hence so we can find some constant C(k) such that P [{0 ↔r x} ◦ {0 ↔r y}] ≥ C(k)P [{u ↔r−k x}]P [{v ↔r−k y}]. Let l = r − k. Then clearly P [{u ↔l x} ◦ {v ↔l y}] ≥ P [u ↔l x, v ↔l y, C(u) 6= C(v)] where C(u) = {z ∈ Zd : z ↔ u} is the cluster around u, and C(u) 6= C(v) means that u not connected to v, and so clearly the paths joining u and x and v and y would have to be disjoint from each other [inequality follows from inclusion of events]. Thus P [{u ↔l x} ◦ {v ↔l y}] ≥P [u ↔l x, v ↔l y, C(u) 6= C(v)] X = P [C(u) = A]P [u ↔l x, v ↔l y, C(u) 6= C(v)|C(u) = A] A:{u↔lx}⊆A,v∈ /A X = P [C(u) = A]P [v ↔l y|C(u) = A] A:{u↔lx}⊆A,v∈ /A 24 X ≥ P [C(u) = A]P [v ↔l y off A] A:{u↔lx}⊆A,v∈ /A Now X P [u ↔l x]P [v ↔l y] = P [C(u) = A]P [v ↔l y]. 
A;{u↔x}⊆A Consequently, P [{u ↔l x} ◦ {v ↔l y}] =P [u ↔l x]P [v ↔l y] − {P [u ↔l x]P [v ↔l y] − P [{u ↔l x} ◦ {v ↔l y}]} X ≥P [u ↔l x]P [v ↔l y] − P [C(u) = A]{P [v ↔l y] − P [v ↔l y off A]} A:{u↔lx}⊆A X =P [u ↔l x]P [v ↔l y] − P [C(u) = A]P [v ↔l y and this path passes through A] A:{u↔lx}⊆A For fixed u, v, we have X P [{u ↔l x}]P [{v ↔l y}] = E(|B(u, l)|)E(|B(v, l)|) = [G(l)]2. x,y Let X I = P [C(u) = A]P [v ↔l y and this path passes through A]. A:{u↔lx}⊆A If the path between v and y of length ≤ l passes through A then there must be some z ∈ A which it goes through, and then we can easily conclude that there exists a path of length ≤ l between v and z, and another between z and y and these are non-intersecting except at z. Then by the B-K inequality, X I ≤ P [C(u) = A]P [{v ↔l z} ◦ {z ↔l y}] z∈A,{u↔lx}⊆A X ≤ P [C(u) = A]P [{v ↔l z}]P [{z ↔l y}] z∈A,{u↔lx}⊆A Now note that X P [C(u) = A] = P [u ↔l x, u ↔ z]. A:u↔lx⊆A,z∈A Hence we can consider the point z0 where the path between u and z intersects the path of length ≤ l between u and x. Then we can write, using B-K inequality, X X P [C(u) = A]P [{v ↔l z}]P [{z ↔l y}] ≤ P [u ↔l z0]P [z0 ↔l x]P [z ↔ z0]P [v ↔l z]P [z ↔l y] z∈A,{u↔lx}⊆A z,z0 And now summing over x, y we get X X X X P [u ↔l z0]P [z0 ↔l x]P [z ↔ z0]P [v ↔l z]P [z ↔l y] = P [u ↔l z0]P [z ↔ z0]P [v ↔l z] P [z0 ↔l x]P [z ↔l y] x,y z,z0 z,z0 x,y X X X = P [u ↔l z0]P [z ↔ z0]P [v ↔l z] P [z0 ↔l x] P [z ↔l y] z,z0 x y X = P [u ↔l z0]P [z ↔ z0]P [v ↔l z]E(|B(z0, l)|)E(|B(z, l)|) z,z0 X =[G(l)]2 P [u ↔l z0]P [z ↔ z0]P [v ↔l z] z,z0 X ≤[G(l)]2 P [u ↔ z0]P [z ↔ z0]P [v ↔ z] z,z0 25

Recall the open triangle condition: there exists K ≥ 1 such that if u, v are vertices with |u − v| ≥ K, then
$$\sum_{x,y\in\mathbb{Z}^d} P_{p_c}(u \leftrightarrow x)\,P_{p_c}(x \leftrightarrow y)\,P_{p_c}(y \leftrightarrow v) \le \frac{1}{10};$$
hence we now choose k such that the above sum is < 1/2 by the open triangle condition. Thus we get
$$I \le \frac{1}{2}[G(l)]^2.$$
Hence
$$\sum_{x,y} P[\{u \leftrightarrow^{l} x\} \circ \{v \leftrightarrow^{l} y\}] \ge \frac{1}{2}[G(l)]^2. \qquad\square$$

Now, to prove Theorem 1(b), we define
$$\Gamma(r) = \sup_{G\subseteq E(\mathbb{Z}^d)} P^G[\delta B(0, r) \ne \emptyset]$$
and we wish to show that $\Gamma(r) \le C/r$. We shall prove that
$$\Gamma(3^k) \le \frac{A}{3^k}.$$
This suffices since the quantity Γ is monotonic in r.

We use theorem B. This means C C P (|C(0)| ≥ n) ≤ √1 ⇒ P G(|C(0)| ≥ n) ≤ √1 . pc n n k Let ε = ε(C1) > 0 to be chosen later. If δB(0, 3 ) 6= φ, then either the cluster size was large, i.e. say |C(0)| ≥ ε.9k, or not. Hence P G(δB(0, 3k) 6= φ) =P G(δB(0, 3k) 6= φ, |C(0)| ≥ ε.9k) + P G(δB(0, 3k) 6= φ, |C(0)| < ε.9k) ≤P G(|C(0)| ≥ ε.9k) + P G(δB(0, 3k) 6= φ, |C(0)| ≤ ε.9k) By theorem B we know that C P G(|C(0)| ≥ ε.9k) ≤ √ 1 . ε.3k Let us denote the second event by II. If II occurs then there must exist some j ∈ [3k−1, 2.3k−1] such that |δB(0, j)| ≤ ε.3k+1, since otherwise we would have 2.3k−1 2.3k−1 X X |δB(0, j)| ≥ ε.3k+1 = ε.3k+1.3k−1 = ε.9k, j=3k−1 j=3k−1 a contradiction. Choose the minimal such j. Also, there must exist some point which is at (shortest) distance exactly 3k away from 0, and clearly this shortest path must intersect δB(0, j), which means that there exists some v ∈ δB(0, j) such that there is a path from v to δB(0, 3k) without returning to B(0, j), which is equivalent to saying that v is connected to a vertex which is at least 3k−1 distance away (since j < 2.3k−1 and hence at least 3k−1 distance away from δB(0, 3k)), and the connecting path is outside B(0, j). Thus X P [δB(0, 3k) 6= φ, |C(0)| ≤ ε.9k|B(0, j)] ≤ P [|C(0)| ≤ ε.9k, δB(v, 3k−1) 6= φ, (5.1) v∈B(0,j) there exists a path from v to δB(v, 3k−1) off B(0, j)|B(0, j)] 26 X ≤ P [δB(v, 3k−1) 6= φ off B(0, j)|B(0, j)] v∈B(0,j) and hence X X P [II] ≤ P [B(0, j) = A] P [δB(v, 3k−1) 6= φ off A|A] A v∈B(0,j) X X = P [B(0, j) = A] P [δB(v, 3k−1) 6= φ off A] A v∈A X X ≤ P [B(0, j) = A] P [δB(v, 3k−1) 6= φ] A v∈A X ≤ P [B(0, j) = A]|A|Γ(3k−1) A X ≤ε.3k+1Γ(3k−1) P [B(0, j) = A] A =ε.3k+1Γ(3k−1)P [B(0, j) 6= φ] ≤ε.3k+1Γ(3k−1)P [B(0, 3k−1) 6= φ] as j ≥ 3k−1; ≤ε.3k+1Γ(3k−1)Γ(3k−1) =ε.3k+1[Γ(3k−1)]2

All the above calculation is with P G, and now taking supremum over G we get C Γ(3k) ≤ √ 1 + ε.3k+1.[Γ(3k−1)]2. ε.3k Now we want to prove A Γ(3k) ≤ , 3k so we define ε = A−4/3, and A is such that 2/3 A > [27 + C1]A . Then plugging in the recursion,by induction, we get C A2/3 A−4/3.3k+1.A2 A Γ(3k) ≤ 1 + < . 3k 32k−2 3k Hence proved. 

6. June 12

Goal: to prove that Theorem 2 implies Theorem 3. For this we shall consider the lazy simple random walk: with probability 1/2 we stay put at our current state, and with probability 1/2 we choose a neighbour uniformly and move to it. Let

$$P_t(0, 0) = P_0(X_t = 0).$$
This quantity is non-increasing in t. This follows from the following result (which can be found in the book "Markov Chains and Mixing Times" by D. A. Levin, Y. Peres, E. L. Wilmer, 2008, Lemma 12.2).

Lemma 6.1. Let P be a chain reversible with respect to π.
(i) The inner product space $(\mathbb{R}^{\Omega}, \langle\cdot,\cdot\rangle_\pi)$ has an orthonormal basis of real-valued eigenfunctions $\{f_j\}_{j=1}^{|\Omega|}$

corresponding to the real eigenvalues {λj}. (ii) The matrix P can be decomposed as

$$\frac{P^t(x, y)}{\pi(y)} = \sum_{j=1}^{|\Omega|} f_j(x)\, f_j(y)\, \lambda_j^t.$$
Now here, if the original random walk, in which for neighbouring vertices i and j we have
$$P(X_{n+1} = j \mid X_n = i) = \frac{1}{d(i)}$$
(where d(i) is the degree of the vertex i), has transition matrix P, then clearly the lazy random walk has transition matrix W = (P + I)/2. For the original random walk, by the Perron–Frobenius theorem, every eigenvalue λ has absolute value ≤ 1. Now let v be an eigenvector of P corresponding to λ; then
$$Pv = \lambda v \;\Rightarrow\; Wv = \frac{I + P}{2}\,v = \frac{v + \lambda v}{2} = \frac{1 + \lambda}{2}\, v.$$

This shows that every eigenvalue of W must be non-negative. Let these eigenvalues be $\lambda_j$. Thus we have
$$P_t(0, 0) = W^t(0, 0) = \pi(0)\sum_{j=1}^{|\Omega|} [f_j(0)]^2\, \lambda_j^t.$$
Now, as $0 \le \lambda_j \le 1$, we have $\lambda_j^t \ge \lambda_j^{t+1}$ and therefore clearly $P_{t+1}(0, 0) \le P_t(0, 0)$. Hence $P_t(0, 0)$ is non-increasing in t.

Theorem 6.2 (Barlow, Coulhon, Kumagai '05). Let G be an infinite graph with degrees bounded by D. Put T = r·|B(0, r)| for any particular r. Then we have
$$P_T(0, 0) \le \frac{C}{|B(0, r)|},$$
where C = C(D) is a constant; note that the above inequality holds for any pre-fixed r. To prove this, we need the following lemma.

Lemma 6.3 (Aldous, Fill '05). Consider a finite irreducible Markov chain and let τ be a stopping time. Assume that for some given a, $P_a(X_\tau = a) = 1$, and also assume that 1 ≤ τ < ∞ almost surely. Then consider the Green's function

$$G_\tau(a, x) = E_a[\#\text{ visits to } x \text{ before } \tau].$$
Then
$$\frac{G_\tau(a, x)}{E_a(\tau)} = \pi(x),$$
where π is the stationary distribution.

Proof. Note that if we denote the Markov chain by $Y_t$, then for all t we have
$$\sum_{x\in V} \mathbf{1}_{Y_t=x}\,\mathbf{1}_{Y_{t+1}=y} = \mathbf{1}_{Y_{t+1}=y}. \tag{6.1}$$
We know that for all y ∈ V,

$$\mathbf{1}_{Y_0=y} = \mathbf{1}_{Y_\tau=y} \quad \text{almost surely } [P_a].$$
Because (6.1) holds, we also have
$$\sum_{x\in V}\sum_{t=0}^{\tau-1} \mathbf{1}_{Y_t=x}\,\mathbf{1}_{Y_{t+1}=y} = \sum_{t=0}^{\tau-1} \mathbf{1}_{Y_{t+1}=y}. \tag{6.2}$$

Taking expectation of the left-hand side of (6.2), we have
$$E_a\Big[\sum_{x\in V}\sum_{t=0}^{\tau-1} \mathbf{1}_{Y_t=x}\,\mathbf{1}_{Y_{t+1}=y}\Big] = \sum_{x\in V} E_a\Big[\sum_{t=0}^{\tau-1} P(x, y)\,\mathbf{1}_{Y_t=x}\Big] = \sum_{x\in V} P(x, y)\, E_a\Big[\sum_{t=0}^{\tau-1} \mathbf{1}_{Y_t=x}\Big] = \sum_{x\in V} P(x, y)\, G_\tau(a, x),$$
and on the right-hand side of (6.2) we get
$$E_a\Big[\sum_{t=0}^{\tau-1} \mathbf{1}_{Y_{t+1}=y}\Big] = E_a\Big[\sum_{t=1}^{\tau} \mathbf{1}_{Y_t=y}\Big] = E_a\Big[\sum_{t=0}^{\tau-1} \mathbf{1}_{Y_t=y}\Big] = G_\tau(a, y),$$
using $\mathbf{1}_{Y_0=y} = \mathbf{1}_{Y_\tau=y}$ in the middle step. Thus we have
$$\sum_{x\in V} P(x, y)\, G_\tau(a, x) = G_\tau(a, y).$$

Hence if we define the row vector $v = (G_\tau(a, y))_{y\in V}$, then $vP = v$, which means, since the irreducible chain has a unique stationary distribution π, that $G_\tau(a, y) = c\,\pi(y)$ for some constant c and all y ∈ V. And as
$$\sum_{y\in V} G_\tau(a, y) = E_a(\tau),$$
we get $c = E_a(\tau)$, and then clearly $G_\tau(a, y) = E_a(\tau)\,\pi(y)$. □
Now to the proof of the theorem.

Proof. By the monotonicity of $P_t(0, 0)$ we know that
$$\sum_{t=0}^{T-1} P_t(0, 0) \ge T\, P_T(0, 0).$$
Also,
$$E_0[\#\text{ visits to } 0 \text{ before } T] = E_0\Big[\sum_{t=0}^{T-1}\mathbf{1}_{Y_t=0}\Big] = \sum_{t=0}^{T-1} P_t(0, 0).$$
Let $\tau(0) = \min\{t \ge T : X_t = 0\}$.

The idea would be to consider things on the finite graph B(0,T ). Consider the induced Markov chain X˜t on B = B(0, r). To define this, consider τ1 < τ2 < ... which are the times at which X visits B, and we define ˜ Xt = Xτt . Define ˜ τ˜(0) = min{t : τt ≥ T and Xt = Xτt = 0}. Thenτ ˜(0) is a stopping time. Note that ˜ E[# visits ofXt to 0 before T ] = Gτ˜(0)(0, 0). Now from above, we can see that T −1 X Pt(0, 0) = E[# visits to 0 before T ] = Gτ˜(0)(0, 0) t=0 and T −1 X Pt(0, 0) ≥ TPT (0, 0) t=0 hence

TPT (0, 0) ≤ Gτ˜(0)(0, 0). By the lemma proved before, we have ˜ Gτ˜(0)(0, 0) =π ˜(0)E0(˜τ(0)). Now π(x) =π ˜(x)π(B).

The reason behind this is that, from the above lemma we have already seen that Gτ (a, x) is proportional to π(x). Now (by a coupling argument) it is clear that

G˜τ˜(a, x) = Gτ (a, x) hence π(x) ∝ Gτ (a, x), andπ ˜(x) ∝ G˜τ˜(a, x) ⇒ π˜(x) ∝ π(x) and as X X π(x) = π(B), π˜(x) = 1, x∈B x∈B hence π(x) =π ˜(x)π(B).

Clearly, from the definition of the induced Markov chain, it is clear that for all x ∈ B, we have

Gτ (a, x) = G˜τ˜(a, x). Thus we have π(0) G (0, 0) = [T + E˜ { time to return to 0}], τ˜(0) π(B) ν where ν = law of Xγ , where γ = min{t ≤ T : Xt ∈ B}. Now using commute time identity we know that

E˜ν { time to return to 0} ≤ r.D.|B(0, r)| since the effective resistance between 0 and any other point in B(0, r) is at most r since each edge has P resistance 1, and since in the commute time identity, we have x∈B(0,r) π(x) ≤ |B(0, r)| clearly, and D here is just some suitable constant. 30

Hence we finally have π(0) π(0) r.D.|B(0, r)| π(0) C π(0) TP (0, 0) ≤ [T + r.D.|B(0, r)|] ⇒ P (0, 0) ≤ [1 + ] = [1 + D] = 1 . T π(B) T π(B) T π(B) π(B) But since deg(x) π(x) = P v∈V (deg(v)) for a simple random walk on a graph (finite), hence we have here: X π(x) ∝ deg(x) ⇒ π(B) ∝ deg(x) ≥ |B(0, r)|, x∈B and thus we have C π(0) C 1 ≤ π(B) |B(0, r)| for a suitable constant C. Hence proved.  We now use two problems of Exercise sheet 2.

$$E_a(\tau_z) = \frac{1}{2}\sum_{x\in V}\pi(x)\,[R_{\mathrm{eff}}(a \leftrightarrow z) + R_{\mathrm{eff}}(z \leftrightarrow x) - R_{\mathrm{eff}}(x \leftrightarrow a)], \tag{6.3}$$
and each summand is non-negative due to the triangle inequality for effective resistance.

Reff (a ↔ z) + Reff (z ↔ x) − Reff (x ↔ a) Px(τa < τz) = . (6.4) 2Reff (a ↔ z) Now we come to the proof of Theorem 3. For this we shall have to apply Theorem 2 in 2 different scales. Proof. Let ε > 0. Then by Theorem 2, there exists a constant A such that for all r ≥ 1, −1 2 2 −1 {A r ≤ |B(0, r)| ≤ Ar } ∩ {Reff (0, δB(0, r)) ≥ A r} with probability ≥ 1 − ε. Because this is true for all r ≥ 1, hence for r large enough, if we consider δ = A−1/2, then for δr too, we shall have the same relation, i.e. −1 2 2 2 2 −1 {A δ r ≤ |B(0, δr)| ≤ Aδ r } ∩ {Reff (0, δB(0, δr)) ≥ A δr} with probability ≥ 1 − ε. Now by the commute time identity, we can say that 3 E(τr) ≤ Ar . This is because, firstly, the effective resistance between 0 and any point on δB(0, r) is at most the length of the shortest path between the two points, which is ≤ r. And X π(x) ≤ |B(0, r)| ≤ Ar2 x∈B(0,r) for large enough A, hence we have the above inequality.

Now we have already observed that with probability ≥ 1 − ε, we have −1 Reff (0, δB(0, r)) ≥ A r, and we know that A−1 R (0 ↔ x) ≤ d(0, x) = δr = r, eff 2 since x ∈ δB(0, δr), and the effective resistance could be no more than the shortest graph distance between the two points. Hence we have, using (6.3) that 1 X E (τ ) ≥ π(x)[A−1r − δr] 0 r 2 x∈B(0,δr) 31 1 X A−1 = π(x)[A−1r − r] 2 2 x∈B(0,δr) 1 X A−1 = π(x) r 2 2 x∈B(0,δr) A−1 ≥C. r.A−1δ2r2 2 A−2 =C. δ2r3 2 Here we have used the fact that for a suitably large constant A, we have X X π(x) ≥ A−1|B(0, δr)| ⇒ π(x) ≥ A−1(δr)2 = A−1δ2r2. x∈B(0,δr) x∈B(0,δr) This shows that we indeed have a constant C such that −1 3 E0(τr) ≥ C r with probability 1 − ε.

To bound Pr3 (0, 0), we apply Theorem 2. We know that |B(0, Ar)| ≥ A−1(Ar)2 = A.r2 with probability ≥ 1 − ε. And we have just proved above that with probability ≥ 1 − ε, C.A3 E (τ ) ≥ C−1r3 ⇒ E (τ ) ≥ C−1(Ar)3 = r3 ≥ 2r3 0 r 0 Ar 2 for A large enough, thus 3 2r ≤E0(τr)

3 3 =E0[(τAr){1τAr ≤r + 1τAr >r }] 3 3 ≤r + E0[(τAr)1τAr >r ] =r3 + P [τ > r3]E (τ ) by the strong Markov property; Ar Xr3 Ar 3 3 3 ≤r + P [τAr > r ]A.(Ar) by Theorem 2 3 3 4 3 =r + P [τAr > r ]A r This shows that 3 −4 P [τAr > r ] ≥ A = C, and hence we get 2 3 2 C ≤[P (τAr > r )] 2 ≤[P (Xr3 ∈ B(0, Ar))] X 2 =[ Pr3 (0, y)] y∈B(0,Ar) X 2 ≤|B(0, Ar)| Pr3 (0, y) by Cauchy Schwarz; y∈B(0,Ar)

2 2 X π(y) ≤D.A r P 3 (0, y) P 3 (y, 0) by reversibility of chain; r π(0) r y∈B(0,Ar) 2 2 X ≤D1A r Pr3 (0, y)Pr3 (y, 0) y∈B(0,Ar) 2 2 =D1A r P2r3 (0, 0) 32

and thus we have
$$P_{2r^3}(0, 0) \ge \frac{C}{r^2}.$$
Hence proved. □

7. June 13

We shall first prove that (6.4) implies (6.3). We know that
$$E_a(\tau_z) = \sum_{x\in V} G_z(a, x).$$
We also know from (6.4) that

$$h(x) = P_x(\tau_a < \tau_z) = \frac{R_{\mathrm{eff}}(a \leftrightarrow z) + R_{\mathrm{eff}}(z \leftrightarrow x) - R_{\mathrm{eff}}(x \leftrightarrow a)}{2\,R_{\mathrm{eff}}(a \leftrightarrow z)}.$$
Now h(x) is a harmonic function with boundary values

h(a) = 1 and h(z) = 0, so $R_{\mathrm{eff}}(a \leftrightarrow z)\,h(x)$ is harmonic on $G - \{a, z\}$ with boundary values $R_{\mathrm{eff}}(a \leftrightarrow z)$ at a and 0 at z. But
$$g(x) = \frac{G_z(a, x)}{\pi(x)}$$
is also harmonic on $G - \{a, z\}$ with the same boundary values (as computed in the proof of the commute time identity), hence the two must be equal. Summing $\pi(x)\,g(x)$ over x and plugging in the expression for h gives (6.3). Hence proved. □

Now we wish to prove (6.4). Consider the induced chain on x, a, z and call the transition matrix P˜. This is a reversible chain, hence has an electric network representation. We shall have π(a) π˜(a) = . π({a, x, z}) Now from what we have learned in the first two classes, 1 π(B) R˜eff (a ↔ z) = = , π˜(a).P˜a(˜τz < τ˜a) π(a)Pa(τz < τa) where π(B) = π({a, x, z}). Let the conductances be p between x and z, q between x and a and 1 between a and z. Then 1 p + p/q + 1 1 C = q + ,C = ,C = 1 + . x,z 1 + 1/p x,a 1 + 1/q a,z 1/p + 1/q Plugging in, we get what we require. Hence proved.
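A small numerical check of (6.4) (a sketch; the example conductances are an assumption, not from the lecture): compute the three pairwise effective resistances via the Laplacian pseudoinverse, and compare the right-hand side of (6.4) with $P_x(\tau_a < \tau_z)$ obtained from the harmonic equations.

```python
# Sketch (example conductances assumed): verify identity (6.4),
#   P_x(tau_a < tau_z) = (R(a,z) + R(z,x) - R(x,a)) / (2 R(a,z)),
# using the Laplacian pseudoinverse for pairwise effective resistances and a
# linear solve for the hitting probability.
import numpy as np

C = np.array([[0., 2., 1., 1.],
              [2., 0., 3., 0.],
              [1., 3., 0., 2.],
              [1., 0., 2., 0.]])
n = C.shape[0]
pi = C.sum(axis=1)
L = np.diag(pi) - C
Lp = np.linalg.pinv(L)

def R(u, v):                        # effective resistance via pseudoinverse
    e = np.zeros(n); e[u] += 1; e[v] -= 1
    return e @ Lp @ e

def hit_prob(x, a, z):              # P_x(tau_a < tau_z) by harmonicity
    A, b = np.eye(n), np.zeros(n)
    b[a] = 1.0                       # boundary values: 1 at a, 0 at z
    for v in range(n):
        if v not in (a, z):
            A[v, :] -= C[v, :] / pi[v]
    return np.linalg.solve(A, b)[x]

a, z, x = 0, 3, 2
lhs = hit_prob(x, a, z)
rhs = (R(a, z) + R(z, x) - R(x, a)) / (2 * R(a, z))
print(lhs, rhs)    # the two values agree
```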

7.1. Random walks on planar graphs.

Definition 7.1. A planar graph is a graph such that there exists a proper embedding of it in $\mathbb{R}^2$.

Definition 7.2. A proper embedding maps vertices to points and edges to continuous curves in $\mathbb{R}^2$, such that if the edge (x, y) is mapped to a curve between x and y, then two edges do not intersect, except when $e_1 = (x, y)$ and $e_2 = (x, z)$, in which case $e_1 \cap e_2 = \{x\}$.

Definition 7.3. A planar map is a planar graph together with an embedding, viewed up to deformations of the edges. The lecture discussed alternative ways of defining planar maps, but it is difficult to write that down without diagrams.

Definition 7.4. A face of a planar map is a connected component of R2 − { edges and vertices } up to deformation. Theorem 7.5 (Euler’s formula). A finite connected planar map satisfies # vertices - # edges + # faces = 2. 33

Definition 7.6. A planar triangulation is a map such that every face has 3 edges. Note that in a planar triangulation, since every edge is adjacent to 2 faces and each face has 3 edges, we have # faces = (2/3) × # edges.

Theorem 7.7 (Circle packing theorem; Koebe '36). Given any planar graph G = (V, E) with vertices $v_1, v_2, \ldots, v_n$, there exist n circles $C_1, C_2, \ldots, C_n$ in $\mathbb{R}^2$ with disjoint interiors such that $C_i$ is tangent to $C_j$ iff $(i, j) \in E$.

Corollary 7.8 (Fáry's theorem). Any planar graph has a proper embedding with straight lines: just connect the centers of the circles in the circle packing of the graph.

Proof of Circle Packing. It suffices to prove the theorem for triangulations: if the map is not a triangulation, we can simply add vertices inside non-triangular faces to make them into triangles, apply the algorithm to perform the circle packing on this new graph, and then erase the circles corresponding to the extra vertices. Given a face $\{v_i, v_j, v_k\}$, we can always find three circles $C_i, C_j, C_k$ with radii $r_i, r_j, r_k$ respectively such that they are mutually tangent. The challenge is to make this work for all n vertices simultaneously.

Note that if we consider the angle $\theta = \angle v_j v_i v_k$ (the angle at $v_i$ in the triangle of centers), then
$$\cos\theta = 1 - \frac{2\, r_j r_k}{(r_i + r_j)(r_i + r_k)}.$$
But we won't really use this formula. Let $\vec r = (r_1, r_2, \ldots, r_n)$ be the ordered tuple of radii of the circles. For any $v_i \in V$ we can define $G_{\vec r}(v_i)$ = the sum of the angles at $v_i$ over the faces containing it, where we do not count the external angle at the outermost vertices. If $v_i$ is an internal vertex (i.e. does not meet the external face) then clearly

$G_{\vec r}(v_i) = 2\pi$, and if $v_i$ is one of the 3 external vertices (one of the 3 vertices forming the external triangular face), and we select equal radii for the circles corresponding to these 3 vertices, then the sum of the angles is π/3. Thus our goal is to find $\vec r$ such that

$$(G_{\vec r}(v_1), \ldots, G_{\vec r}(v_n)) = (\pi/3,\ \pi/3,\ \pi/3,\ 2\pi,\ 2\pi, \ldots, 2\pi).$$
To this end, we define

$$\delta_{\vec r}(v_i) = G_{\vec r}(v_i) - \pi/3 \quad\text{if } i \in \{1, 2, 3\}, \qquad \delta_{\vec r}(v_i) = G_{\vec r}(v_i) - 2\pi \quad\text{if } i \in \{4, 5, \ldots, n\}.$$
Claim 1: Given such an $\vec r$, a greedy construction gives the required circle packing.

This means that we first draw 3 mutually tangent circles with equal radii, $C_1, C_2, C_3$; then if, say, $v_4$ is adjacent to both $v_1$ and $v_2$, there is only one way to draw the circle $C_4$ tangent to $C_1$ and $C_2$ and within the region enclosed by $C_1, C_2, C_3$; and thus we continue. Note that two circles corresponding to two adjacent points cannot intersect, since then the angle at one of the external vertices would become < π/3, and also no undesired gap can be left, since it would be > π/3.

Step 1: No matter what order we proceed with the construction of the circle packing, the locations of the circles (relative to the 3 external ones) are always the same. Also, no adjacent circles intersect, they are only tangent to each other.

Step 2: No circles intersect (except tangentially).

We call a cycle bad if we have 2 circles on it which intersect. Here it was shown in class that if such a cycle exists, then we can shorten the cycle (i.e. find another cycle which is a subset of this cycle) and which is also bad, and thus the argument can proceed by induction on the length of the cycle. Once again, impossible to explain without diagram.

Key observation: Consider a face $(v_i, v_j, v_k)$ with radii $r_i, r_j, r_k$. If we increase $r_i$ and leave unchanged or decrease $r_j, r_k$, then the angle at i corresponding to this face, i.e. $\angle v_j v_i v_k$, will decrease. On the other hand, if we let $r_i, r_j$ increase and $r_k$ decrease, then the angle at k will increase. Let
$$\mathcal{D}(\vec r) = \sum_{i=1}^n \delta_{\vec r}(v_i)^2.$$
We have to show that, in some sense, in the limit this goes to 0.

Observation: Note that
$$\sum_{i=1}^n G_{\vec r}(v_i) = (|F| - 1)\pi \quad\text{for all } \vec r.$$
This is because summing the angles formed at all the vertices (excluding the external angles at the three outer vertices) is the same as summing the total angle in each of the triangular faces other than the external face, and each such face contributes π. But the target vector asks for angle 2π at n − 3 of the vertices and π/3 at the 3 outer ones, and indeed
$$\sum_{i=1}^n G_{\vec r}(v_i) = (|F| - 1)\pi = 2\pi(n - 3) + 3\cdot\frac{\pi}{3} = (2n - 5)\pi.$$
This also shows that
$$\sum_{i=1}^n \delta_{\vec r}(v_i) = 0.$$

Algorithm: Plot the values of $\delta_{\vec r}(v_i)$, 1 ≤ i ≤ n, on the real line. If these are already all 0, we are done. Otherwise, choose the maximal gap, i.e. choose a subset S of V such that
$$-\max_{v\in V-S}\delta_{\vec r}(v) + \min_{v\in S}\delta_{\vec r}(v)$$
is maximal. That is, S is to the right of the gap, and
$$\max_{v\in V-S}\delta_{\vec r}(v) < \min_{v\in S}\delta_{\vec r}(v).$$
For some λ ∈ (0, 1), create a new vector $\vec r_\lambda$ by the transformation
$$r_i \mapsto \lambda r_i \text{ for all } v_i \in V - S, \qquad r_i \mapsto r_i \text{ for all } v_i \in S.$$
Then normalize this new $\vec r_\lambda$ so that $\sum_{i=1}^n (\vec r_\lambda)_i = 1$.
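As a tiny numerical companion to this (a sketch only, and not the gap-closing scheme described above): for the smallest triangulation, the outer triangle $v_1, v_2, v_3$ with one interior vertex $v_4$, the three outer radii can be fixed equal, and the single interior radius can be tuned by bisection until its angle sum $G(v_4)$ equals 2π, using the cosine formula for face angles stated earlier. All names and parameter choices below are illustrative assumptions.

```python
# Minimal sketch (not the lecture's gap-closing algorithm): tune the interior
# radius r4 of the K4 triangulation (outer triangle v1, v2, v3 with interior
# vertex v4, outer radii fixed to 1) so that the angle sum at v4 is 2*pi.
# Face angles use cos(theta) = 1 - 2*r_j*r_k / ((r_i + r_j)(r_i + r_k)).
import math

def face_angle(ri, rj, rk):
    """Angle at the vertex with radius ri in the face (ri, rj, rk)."""
    return math.acos(1.0 - 2.0 * rj * rk / ((ri + rj) * (ri + rk)))

def angle_sum_interior(r4, R=1.0):
    """G(v4): v4 lies in the three faces (v4,v1,v2), (v4,v2,v3), (v4,v1,v3)."""
    return 3.0 * face_angle(r4, R, R)

# Increasing r4 decreases the angle sum (the key observation above), so
# bisection finds the radius whose angle sum is exactly 2*pi.
lo, hi = 1e-9, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if angle_sum_interior(mid) > 2 * math.pi:
        lo = mid          # angles too large -> interior circle too small
    else:
        hi = mid
r4 = 0.5 * (lo + hi)
print("interior radius:", r4)                    # about 0.1547 = 2/sqrt(3) - 1
print("angle sum / 2pi:", angle_sum_interior(r4) / (2 * math.pi))
```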

Claim 1: This increases δ−→r (vi), i ∈ V − S and decreases δ(vi), i ∈ S.

Claim 2: There exists a λ ∈ (0, 1) such that after the above change, the gap closes, i.e.

$$\min_{i\in S}\delta_{\vec r_\lambda}(v_i) = \min_{i\in V-S}\delta_{\vec r_\lambda}(v_i).$$
Claim 3: If $\vec r^{\,k}$ denotes the radii vector at the k-th step, then $\mathcal{D}(\vec r^{\,k}) \to 0$ and $\vec r^{\,k} \to \vec r^{\,\infty}$ as k → ∞.

Proof of Claim 1: Consider $v_i \in V - S$ and a face $\{v_i, v_j, v_k\}$.

Case 1: If all of $v_i, v_j, v_k$ belong to $V - S$, then after the transformation we have $\lambda r_i, \lambda r_j, \lambda r_k$, so all 3 sides are reduced by the same ratio, and hence the angle $\angle(v_j, v_i, v_k)$ does not change.

Case 2: $v_i, v_j \in V - S$ and $v_k \in S$. Then

$r_i \mapsto \lambda r_i$, $r_j \mapsto \lambda r_j$, $r_k \mapsto r_k$,

which means that the angle $\angle(v_j, v_i, v_k)$ will increase, as is clear from the key observation.

Case 3: $v_i \in V - S$ and $v_j, v_k \in S$. Then

$r_i \mapsto \lambda r_i$, $r_j \mapsto r_j$, $r_k \mapsto r_k$,

and again by the key observation the angle $\angle(v_j, v_i, v_k)$ will increase.

Similarly when we choose vi ∈ S, we can divide things into 3 possible cases, and show that the angle decreases.

Proof of Claim 2: We'll first show that

$\lim_{\lambda \to 0} \sum_{v_i \in V - S} \delta_{\vec r_\lambda}(v_i) > 0;$

but because we know that $\sum_{v_i \in V} \delta_{\vec r}(v_i) = 0$ for all $\vec r$, we must then have

$\lim_{\lambda \to 0} \sum_{v_i \in S} \delta_{\vec r_\lambda}(v_i) < 0.$

But since initially we started with $S$ entirely on the right and $V - S$ entirely on the left, this means that eventually they must cross, and by continuity there must exist a $\lambda$ where they just meet.

To show the above, we observe that in each face $\{v_i, v_j, v_k\}$ in which at least one vertex belongs to $V - S$, the sum of the angles at the vertices of $V - S$ converges to $\pi$ as $\lambda \to 0$. To see this, we again consider a few cases.

Case 1: If all of $v_i, v_j, v_k$ belong to $V - S$, then the sum is already $\pi$.

Case 2: If $v_k \in S$ and $v_i, v_j \in V - S$, then $r_k \mapsto r_k$, $r_i \mapsto \lambda r_i$, $r_j \mapsto \lambda r_j$, so $r_i$ and $r_j$ become smaller and smaller as $\lambda \to 0$ while $r_k$ remains the same; hence from the key observation the angle $\angle(v_i, v_k, v_j)$ goes to 0. Since the total sum is $\pi$, we know that

$\lim_{\lambda \to 0} \big[\angle(v_j, v_i, v_k) + \angle(v_i, v_j, v_k)\big] = \pi.$

Case 3: If $v_k, v_j \in S$ and $v_i \in V - S$, then $r_k \mapsto r_k$, $r_j \mapsto r_j$, $r_i \mapsto \lambda r_i$, which means that the angle $\angle(v_j, v_i, v_k)$ keeps increasing towards $\pi$.

From this we can conclude that

$\lim_{\lambda \to 0} \sum_{v_i \in V - S} G_{\vec r_\lambda}(v_i) = \pi\,|F(V - S)|,$

where $F(V - S)$ is the set of all faces having at least one vertex in $V - S$. Let $\bar F$ be the set of faces in which none of the vertices is in $V - S$. Now, by Euler's formula, if $v$ is the number of vertices, $f$ the number of faces and $e$ the number of edges of a planar graph, then $v - e + f = 2$.

Here we have only triangular faces, so each face has 3 edges, and since each edge is shared by at most 2 faces, the subgraph spanned by the faces of $\bar F$ has $e \ge \tfrac{3}{2}|\bar F|$ edges. Since that subgraph has at most $|S|$ vertices, Euler's formula gives

$|S| - \tfrac{3}{2}|\bar F| + |\bar F| \ge 2 \;\Rightarrow\; |\bar F| \le 2|S| - 4.$

But it cannot be an exact equality, since then every vertex of $S$ would be adjacent only to faces of $\bar F$, meaning the vertices of $S$ would not be connected to $V - S$, although the graph is connected. Contradiction! Hence we must have $|\bar F| \le 2|S| - 5$. Now there are a few cases to consider:

(i) When $|\bar F| < 2|S| - 5$: since again by Euler's formula $|F| = 2|V| - 4$ (this time exactly, as we are considering a complete planar triangulation), we have

$|F(V - S)| = |F| - |\bar F| > 2|V| - 4 - 2|S| + 5 = 2|V - S| + 1 \;\Rightarrow\; |F(V - S)| > 2|V - S|.$

Thus from the previous relation we have

$\lim_{\lambda \to 0} \sum_{v_i \in V - S} G_{\vec r_\lambda}(v_i) \ge (2|V - S| + 1)\pi \;\Rightarrow\; \lim_{\lambda \to 0} \sum_{v_i \in V - S} \delta_{\vec r_\lambda}(v_i) \ge \pi > 0.$

Hence we are done in this case.

(ii) If $|\bar F| = 2|S| - 5$ and one of the outermost vertices $v_1, v_2, v_3$ is in $V - S$, then we have

$\lim_{\lambda \to 0} \sum_{v_i \in V - S} G_{\vec r_\lambda}(v_i) \ge 2|V - S|\,\pi,$

but since the target angle at the external vertex lying in $V - S$ is only $\pi/3$, we get

$\lim_{\lambda \to 0} \sum_{v_i \in V - S} \delta_{\vec r_\lambda}(v_i) \ge 2|V - S|\pi - (2|V - S| - 1)\pi - \pi/3 = 2\pi/3 > 0,$

hence we are good.

(iii) Finally, consider $|\bar F| = 2|S| - 5$ with all of $v_1, v_2, v_3$ in $S$. We shall show that this leads to a contradiction unless $S = V$. Because $|\bar F| = 2|S| - 5$, this subgraph must itself be a full triangulation, by Euler's formula, and all the outer vertices $v_1, v_2, v_3$ lie inside it (since they are in $S$). Any vertex of $V - S$ would then have to sit outside this triangulation structure, which is impossible, and so in fact $V - S = \emptyset$. Contradiction!

8. June 17

Proof of Claim 3: Let, after equalization (which follows from the previous claim),

$t = \min_{v \in S} \delta_{\vec r_\lambda}(v) = \max_{v \in V - S} \delta_{\vec r_\lambda}(v).$

Denote by $\delta(v)$ the original defects and by $\delta'(v)$ the defects after the transformation $\vec r \mapsto \vec r_\lambda$, and write

$\varepsilon = \sum_{v \in V} \delta(v)^2, \qquad \varepsilon' = \sum_{v \in V} \delta'(v)^2.$

Then, simply using the fact that $\sum_{v \in V} \delta_{\vec r}(v) = 0$ for any $\vec r$, we have

$\varepsilon - \varepsilon' = \sum_{v \in V} \delta(v)^2 - \sum_{v \in V} \delta'(v)^2 = \sum_{v \in V} [\delta(v) - \delta'(v)]^2 + 2\sum_{v \in V} (t - \delta'(v))(\delta'(v) - \delta(v)).$

Now if $t - \delta'(v) > 0$ then $v \in V - S$, and since under the transformation the $\delta$ values of $V - S$ move to the right, $\delta'(v) \ge \delta(v)$. On the other hand, if $t - \delta'(v) < 0$ then $v \in S$, and the $\delta$ values of $S$ move to the left, so $\delta'(v) \le \delta(v)$. Hence

$\sum_{v \in V} (t - \delta'(v))(\delta'(v) - \delta(v)) \ge 0 \;\Rightarrow\; \varepsilon - \varepsilon' \ge \sum_{v \in V} [\delta(v) - \delta'(v)]^2.$

Now note that there must exist at least one vertex $v$ whose $\delta$ value changes by at least half the gap $\min_{v \in S}\delta(v) - \max_{v \in V-S}\delta(v)$, which means that

$\varepsilon - \varepsilon' \ge \frac{\big[\min_{v \in S}\delta(v) - \max_{v \in V-S}\delta(v)\big]^2}{4}.$

Since there are at most $n - 1$ gaps between consecutive $\delta(v)$ values,

$\min_{v \in S}\delta(v) - \max_{v \in V-S}\delta(v) \ge \frac{1}{n-1}\big[\max_{v}\delta(v) - \min_{v}\delta(v)\big] > \frac{1}{n}\big[\max_{v}\delta(v) - \min_{v}\delta(v)\big].$

Hence

$\big[\min_{v \in S}\delta(v) - \max_{v \in V-S}\delta(v)\big]^2 \ge \frac{1}{n^2}\big[\max_{v}\delta(v) - \min_{v}\delta(v)\big]^2 \ge \frac{1}{n^2}\,\delta(v)^2,$

since $\max_v \delta(v) \ge \delta(v)$ and $\min_v \delta(v) \le 0$ (because $\sum_v \delta(v) = 0$). This holds for every $v \in V$, hence

$\varepsilon = \sum_{v \in V}\delta(v)^2 \le n\,\big[\max_{v}\delta(v) - \min_{v}\delta(v)\big]^2.$

Combining all of these we get

$\varepsilon - \varepsilon' \ge \frac{1}{4}\Big(\frac{1}{n}\Big)^2\big[\max_v\delta(v) - \min_v\delta(v)\big]^2 \ge \frac{\varepsilon}{4n^3} \;\Rightarrow\; \varepsilon' \le \varepsilon\Big[1 - \frac{1}{4n^3}\Big].$

Note that $n$ is fixed.

So if $\vec r^{(k)}$ is the radii vector after the $k$-th iteration, then

$\varepsilon(\vec r^{(k)}) \to 0$ as $k \to \infty$,

and this also implies that $\vec r^{(k)} \to \vec r^{(\infty)}$ as $k \to \infty$, at least as a subsequential limit.

Lastly, let $I \subseteq V$ be such that $r_i^{(\infty)} = 0$ for all $v_i \in I$, with $I \ne V$. Now, as before, we can conclude that

$\lim_{k \to \infty} \sum_{v_i \in I} G_{\vec r^{(k)}}(v_i) = \pi\,|F(I)|,$

where $F(I)$ is the set of all faces with at least one vertex in $I$. Put $\bar F$ for the set of all other faces. If $|\bar F| < 2|V - I| - 5$, then once again, in the same way as before, we shall have

$\lim_{k \to \infty} \sum_{v_i \in I} \delta_{\vec r^{(k)}}(v_i) > 0,$

which is a contradiction.

When $|\bar F| = 2|V - I| - 5$, we again argue as before, dividing into two subcases: one where at least one of the outer vertices $v_1, v_2, v_3$ is in $I$, and the other where none of them is in $I$, which basically cannot happen.

Corollary 8.1. Let G be a connected finite planar map, with external face {v1, v2, ..., vk}. Then there exists a

circle packing of G contained in the unit disc U = {z ∈ C : |z| ≤ 1} such that only the circles Cv1 ,Cv2 , ..., Cvk are tangent to δU.

Proof. Firstly, add one vertex v to the external face, and connect it with edges to each of v1, v2, ..., vk. This transforms the external face into a triangle. Then apply the circle packing theorem to this new graph G0, and rescale and translate if necessary to make the circle Cv corresponding to v coincide with δU. Now invert the whole figure with respect to Cv, i.e. take the transformation z 7→ 1/z, and this will bring all the other

circles inside δU, with Cv1 , ...Cvk touching δU. Hence proved. 
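The inversion step in this proof can be made concrete: under $z \mapsto 1/z$ a circle not passing through the origin maps to another circle, with an explicit centre and radius. A small sketch (the standard inversion formula; the helper name is mine):

def invert_circle(c, r):
    # Image of the circle |z - c| = r (not through 0) under z -> 1/z.
    # It is again a circle, centred at conj(c)/(|c|^2 - r^2) with radius
    # r / ||c|^2 - r^2|.
    k = abs(c)**2 - r**2
    return c.conjugate() / k, abs(r / k)

# Example: the circle of radius 1 around 3 (covering [2,4] on the real axis)
# maps to the circle covering [1/4, 1/2], i.e. centre 3/8 and radius 1/8.
print(invert_circle(3 + 0j, 1.0))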

Lemma 8.2 (The Ring Lemma; Rodin–Sullivan). For all $M \in \mathbb N$ there exists $A$ such that the following holds. Let $C_0$ be a circle completely surrounded by $m \le M$ other circles $C_1, C_2, \ldots, C_m$ which touch $C_0$ and each touch their neighbours (i.e. $C_i$ touches $C_{i+1}$, indices taken mod $m$). Let $r_i$ be the radius of $C_i$ for $0 \le i \le m$. Then for all $1 \le i \le m$ we have $r_0/r_i \le A$.

Proof. Once again, I cannot include a picture here, but I will try to describe the idea of the argument. Let $r_0 = 1$ without loss of generality. If, say, $r_1$ is very small, then either the circle to the left of $C_1$ or the one to its right must be very small too, since if both were large they would intersect each other. Without loss of generality, say the circle to the right of $C_1$, namely $C_2$, is very small; then, for the same reason, either the circle to the left of $C_1$ or the one to the right of $C_2$ must be small. Continuing in this way, we see that the number of circles needed to surround $C_0$, i.e. to cover its entire circumference, would have to exceed $M$. Contradiction! Hence proved. 

Corollary 8.3. Let $G$ be a triangulation with bounded degree, i.e. there exists $M \in \mathbb N$ such that every vertex $v \in G$ has degree $\le M$. Consider the circle packing $P$ of $G$, and let $(u, v)$ be an edge of $G$ such that $u$ and $v$ are not on the external face. Then there exists a constant $A$, independent of $u$ and $v$ (and depending only on $M$), such that

$A^{-1} \le \frac{r_v}{r_u} \le A.$

Here we have at most M circles surrounding the circle Cu, one of which is Cv as u and v are adjacent, hence by the ring lemma, we know that ru/rv ≤ A. Conversely, for the exact same reason, rv/ru ≤ A.

We now consider infinite planar graphs which are locally finite, i.e. for all x, we have deg(x) < ∞. Then Claim: For all such planar graphs G there exists a circle packing.

Proof. Adding vertices and edges whenever necessary, it suffices to prove the claim for a triangulation. Let Vj be the ball of radius j centered at some vertex ρ in G. We apply the circle packing theorem to Vj, and let’s call that Pj. Now normalize everything so that the circle corresponding to ρ is δU, the unit circle.

Let $v_1, v_2, \ldots, v_M$ be the neighbours of $\rho$. By the Ring Lemma, the radii of $C_{v_1}, C_{v_2}, \ldots, C_{v_M}$ are bounded above and below, uniformly over all the $P_j$. Hence, if the radius of $C_{v_1}$ in the packing $P_j$ is $r_j^{(1)}$ and its centre is $c_j^{(1)}$, then since both are bounded sequences they have a convergent subsequence. Continuing by the diagonal argument, we can find a subsequence $j_k$ such that for all $1 \le i \le M$, $r_{j_k}^{(i)} \to r^{(i)}$ and $c_{j_k}^{(i)} \to c^{(i)}$ as $k \to \infty$.

Definition 8.4. An infinite connected graph is one-ended if for all $S \subseteq V$ with $|S| < \infty$, removing $S$ from $G$ leaves a unique infinite connected component.

For example, Zd, d ≥ 2 are all one-ended, but not Z.

Definition 8.5. Given a circle packing P = {Cv, v ∈ V } of a triangulation G, we define the carrier carr(P ) to be the union of all circles plus the space between any three circles the vertices corresponding to which form a face.

Definition 8.6. Given a circle packing P , a point z ∈ C is an accumulation point of P if each neighbourhood of z intersects infinitely many circles of P . We denote the set of all accumulation points of P as Z(P ). Remark 8.7. i) G is a one-ended planar triangulation if and only if carr(P ) is simply connected. ii) G is an infinite planar triangulation, then Z(P ) = δ(carr(P )).

One way to argue why the first remark is true, is as follows. Suppose carr(P ) is not simply connected, then it will have a hole inside it, say H. Now consider a circle C which completely encloses the hole inside it, but does not intersect the boundary of carr(P ). Then C will intersect finitely many circles of the circle packing P , say

S = {Cv ∈ P : C ∩ Cv 6= φ}

then |S| < ∞. Then we remove the finite subset of points v in G for which Cv ∈ S. Then there are two infinite components, one corresponding to the circles of circle packing P inside the disc enclosed by C, and one outside. Notice that the boundary of carr(P ) here serves as the set of all accumulation points, hence as we approach the boundary of the hole H, we get infinitely many circles of the circle packing.

The class of one-ended planar triangulations is also referred to as triangulations of the plane.

Definition 8.8. Given $D \subseteq \mathbb R^2$, write $V_D \subseteq V$ for the set of all $v \in V$ such that the circle $C_v$ has its centre in $D$.

9. June 19

Theorem 9.1. Let P = {Cv, v ∈ V } be a circle packing of a bounded-degree triangulation such that carr(P ) = R2. Then G(P ) is recurrent. To prove this we need the following lemma:

Lemma 9.2. There exist constants $A, c > 0$ such that for all $R \ge 1$:
i) there exists no edge between $V_{B(0,R)}$ and $V - V_{B(0,AR)}$;
ii) $R_{\mathrm{eff}}(V_{B(0,R)} \leftrightarrow V - V_{B(0,AR)}) \ge c$.

For now, we assume the lemma and try to prove the theorem.

Proof of the theorem. Consider the unit current flow $I$ from $\rho$ to $\infty$. We need to show that the energy $\mathcal E(I) = \infty$. Consider the pairs $u, v$ such that $C_u$ and $C_v$ have centres in the annulus between $B(0,1)$ and $B(0,A)$ and such that $(u, v)$ is an edge of $G$. The sum of the energies of all such edges is at least $c$, because from Thomson's principle,

$R_{\mathrm{eff}}(V_{B(0,1)} \leftrightarrow V - V_{B(0,A)}) = \inf\{\mathcal E(\theta) : \theta \text{ is a flow between } V_{B(0,1)} \text{ and } V - V_{B(0,A)},\ \|\theta\| = 1\},$

so $R_{\mathrm{eff}}(V_{B(0,1)} \leftrightarrow V - V_{B(0,A)}) \ge c$ forces the energy of $I$ on these edges to be at least $c$. Now consider the edges between vertices whose circles have centres in the annulus between $B(0,A^2)$ and $B(0,A^3)$; again the energy of all these edges is at least $c$, by part (ii) of the Lemma. We keep proceeding like this. By part (i) of the Lemma these are all mutually disjoint sets of edges, hence (adding in series) $\mathcal E(I) = \infty$. In fact, since reaching distance $R$ in this way takes of the order of $\log R$ such annuli, we have

$R_{\mathrm{eff}}(\rho \leftrightarrow V - V_{B(0,R)}) \ge c\log R.$

Hence proved. 

Proof of the lemma. i) We take the circle corresponding to the vertex $\rho$ in the circle packing $P$ to be the unit circle $\delta U$. Suppose there indeed existed an edge between $V_{B(0,R)}$ and $V - V_{B(0,AR)}$; then there exist vertices $u$ and $v$ such that $C_u$ has centre $p_u \in B(0,R)$, $C_v$ has centre $p_v$ with $\|p_v\| > AR$ (where $\|\cdot\|$ is the Euclidean norm), and $(u,v)$ is an edge. But this means the circles $C_u$ and $C_v$ are tangent to each other, so $r_u + r_v = \|p_v - p_u\| \ge (A-1)R$. At the same time, since $C_\rho$ is the unit circle and $C_u$ has its centre within the disc of radius $R$ about the origin, the radius $r_u$ of $C_u$ cannot exceed $(R-1)/2$, as $C_\rho$ and $C_u$ must not overlap. Thus

$r_u < R, \quad r_v \ge (A - 1)R - r_u > (A - 2)R \;\Rightarrow\; \frac{r_v}{r_u} > A - 2,$

but since the degrees in $G$ are bounded and $u, v$ are adjacent, the corollary of the Ring Lemma bounds $r_v/r_u$ above by a fixed constant. Choosing $A$ larger than that constant plus 2 therefore rules out such an edge. Hence proved.

ii) Recall the discrete Dirichlet principle, which states that

$\frac{1}{R_{\mathrm{eff}}(a \leftrightarrow z)} = \inf\{\mathcal E(h) : h : V \to \mathbb R,\ h(a) = 1,\ h(z) = 0\},$

where

$\mathcal E(h) = \sum_{e = (x,y)} C_e\,[h(x) - h(y)]^2.$

Note that here the conductance of each edge is 1.

For any vertex $x$ whose circle $C_x$ has its centre in the annulus between $B(0,R)$ and $B(0,AR)$, set

$h(x) = \frac{\mathrm{dist}(x, B(0,R))}{(A-1)R},$

where dist denotes Euclidean (not graph) distance and, on the right-hand side, $x$ and $y$ stand for the centres of the circles $C_x$ and $C_y$. Clearly

$|h(x) - h(y)| = \frac{|\mathrm{dist}(x, B(0,R)) - \mathrm{dist}(y, B(0,R))|}{(A-1)R} \le \frac{\mathrm{dist}(x,y)}{(A-1)R}.$

If $x$ and $y$ happen to be neighbours then, by the Ring Lemma, $r_y/r_x \le B$ for some constant $B$ not depending on $x$ and $y$; hence $(r_y + r_x)/r_x \le B + 1$ and

$\mathrm{dist}(x,y) = r_x + r_y \le C\,r_x, \qquad C = B + 1.$

Hence $|h(x) - h(y)| \le \dfrac{C\,r_x}{(A-1)R}$ when $x \sim y$, and so, with all sums over $x, y \in V_{B(0,AR)-B(0,R)}$,

$\sum_{x \sim y} |h(x) - h(y)|^2 \le \sum_{x}\sum_{y \sim x} \frac{(C r_x)^2}{(A-1)^2R^2} = \frac{C}{(A-1)^2R^2}\sum_{x}\sum_{y \sim x}\mathrm{Area}(C_x)$ (constant altered suitably)
$\le \frac{C}{(A-1)^2R^2}\sum_{x}\deg(x)\,\mathrm{Area}(C_x) \le \frac{C M}{(A-1)^2R^2}\sum_{x}\mathrm{Area}(C_x)$ (degrees bounded by $M$)
$\le \frac{C}{(A-1)^2R^2}\,\mathrm{Area}\big(B(0,AR) - B(0,R)\big) = \frac{C}{(A-1)^2R^2}\,(\pi A^2R^2 - \pi R^2) \le C,$

where the constant $C$ may differ from line to line. Then from the Dirichlet principle we know that the effective resistance is at least $1/C$. Hence proved. 
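The Dirichlet-principle step lends itself to direct computation: given the circle centres and the tangency edges, the energy of the radial test function above upper-bounds $1/R_{\mathrm{eff}}$ between the inner and outer vertex sets. A minimal sketch (the data format and names are mine):

import numpy as np

def test_function_energy(centers, edges, R, A):
    # Energy of h(x) = dist(center_x, B(0,R)) / ((A-1)R), clipped to [0,1],
    # over the tangency edges with unit conductances.  By the Dirichlet
    # principle this upper-bounds 1/R_eff between V_{B(0,R)} and
    # V - V_{B(0,AR)}.
    def h(p):
        d = max(np.linalg.norm(p) - R, 0.0)
        return min(d / ((A - 1) * R), 1.0)
    return sum((h(centers[u]) - h(centers[v]))**2 for u, v in edges)

# Toy usage: three circles strung out radially between radius R and AR.
centers = {0: (1.0, 0.0), 1: (3.0, 0.0), 2: (6.0, 0.0)}
edges = [(0, 1), (1, 2)]
print(test_function_energy(centers, edges, R=2.0, A=3.0))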

Corollary 9.3. Let P = {Cv, v ∈ V } be a circle packing of a bounded degree triangulation with carr(P ) = R2 − {p} for some p ∈ R2. Then G(P ) is recurrent. Proof. One way to argue is to start the argument from R such that p ∈ B(0,R) and consider everything outside this ball. 42

Alternatively, we can make the circles go smaller and smaller as they approach p. Then we draw a Euclidean circle C around p. This will intersect finitely many circles of the circle packing P . Those circles of P which lie outside this circle C will again create a recurrent network by the same argument as before. As for the ones inside C, we can simply invert everything with respect to C, which then transfers the inner circles outside, and again the same argument can be applied. 

Fact (proved by Nachmias, GG, Santo): the graph remains recurrent iff carr(P) = $\mathbb R^2 - A$ where $A$ is a polar set, i.e. a set which Brownian motion misses with probability 1.

9.1. Distributional limits. Also called the B-S limits or local limits or local (weak) limits. We say that a sequence of finite graphs Gn converges to an infinite random rooted graph (G, ρ) when for all r, we have

BGn (ρn, r) → BG(ρ, r) in distribution

where ρn is any uniformly randomly chosen vertex of Gn.

Theorem 9.4 (Benjamini–Schramm). If the $G_n$ are all planar with bounded degree and $G_n \to (G, \rho)$, then $G$ is almost surely recurrent.

Theorem 9.5 (Angel–Schramm). The distributional limit of random planar triangulations exists (and is called the UIPT).

Fact: In the UIPT, we have $P[\deg(\rho) \ge k] \le e^{-ck}$ for some constant $c$.

Lemma 9.6 (Benjamini–Schramm ’01). First we describe the setting. Let $C \subseteq \mathbb R^2$ be a finite set of points. For $w \in C$, the isolation radius $\rho_w$ is defined as

$\rho_w = \inf\{|v - w| : v \in C - \{w\}\}.$

Given $\delta \in (0,1)$, $s \ge 2$ and $w \in C$, we say that $w$ is $(\delta, s)$-supported if in the disc of radius $\delta^{-1}\rho_w$ around $w$ there are more than $s$ points of $C$ outside any disc of radius $\delta\rho_w$, that is,

$\inf_{p \in \mathbb R^2}\big|C \cap \big(B(w, \delta^{-1}\rho_w) - B(p, \delta\rho_w)\big)\big| \ge s.$

The idea is that, when we look at the annulus between $B(w, \rho_w)$ and $B(w, \delta^{-1}\rho_w)$ (note that within the ball $B(w, \rho_w)$ there can be no other point of $C$, by the definition of the isolation radius), there are at least $s$ points of $C$ in this annulus. But that is not all: these $s$ points are not clustered together, but are distributed uniformly enough that when we remove any disc of radius $\delta\rho_w$, there still remain at least $s$ points of $C$ in the annulus outside this removed disc.

Now the statement of the lemma: for all $\delta \in (0,1)$ there exists $c = c(\delta) = O(\delta^{-3})$ such that for every finite set of points $C \subseteq \mathbb R^2$ and every $s \ge 2$, the number of $(\delta, s)$-supported points in $C$ is at most $c(\delta)\,|C|/s$.
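As a sanity check, the $(\delta, s)$-supported condition is easy to test numerically for a concrete point set; the infimum over all centres $p$ can only be approximated, so the sketch below (names and the grid approximation are mine) may over-report support:

import numpy as np

def isolation_radius(C, i):
    # rho_w: distance from C[i] to the nearest other point of C.
    d = np.linalg.norm(C - C[i], axis=1)
    d[i] = np.inf
    return d.min()

def is_supported(C, i, delta, s, grid=25):
    # Rough test that C[i] is (delta, s)-supported: at least s points of C
    # remain in B(w, rho_w/delta) after removing any disc of radius
    # delta*rho_w, the infimum over centres p being approximated on a grid.
    w, rho = C[i], isolation_radius(C, i)
    inside = C[np.linalg.norm(C - w, axis=1) <= rho / delta]
    span = np.linspace(-rho / delta, rho / delta, grid)
    worst = min(
        np.sum(np.linalg.norm(inside - (w + np.array([dx, dy])), axis=1) > delta * rho)
        for dx in span for dy in span)
    return worst >= s

rng = np.random.default_rng(0)
C = rng.uniform(-10, 10, size=(500, 2))
print(sum(is_supported(C, i, delta=0.2, s=5) for i in range(len(C))))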

10. June 20 We restate the lemma as its statement has been made more precise today:

Lemma 10.1. There exists a constant $A > 0$ such that for all $\delta \in (0,1)$, all finite subsets $C$ of $\mathbb R^2$, and all $s \ge 2$, the number of $(\delta, s)$-supported points in $C$ is at most

$\frac{A\,\delta^{-2}\log(\delta^{-1})\,|C|}{s}.$

Proof. Assume that $\rho_w \ge \sqrt 2$ for all $w \in C$, where $\rho_w$ is the isolation radius (we may need to rescale to ensure this).

Step 1: Let $k \ge 3$ (to be chosen in Step 2). Let $G_0$ be a tiling of $\mathbb R^2$ by $1 \times 1$ squares. Let $G_1$ then be a tiling of $\mathbb R^2$ by $k \times k$ squares whose edges align with the edges of the smaller squares of $G_0$. Continuing in this way, let $G_n$ be a tiling of $\mathbb R^2$ by $k^n \times k^n$ squares aligned with $G_{n-1}$. Let $\mathcal G$ be the set of all squares of all the $G_n$. Assume, without loss of generality, that no point of $C$ lies on the boundary of any square of any $G_n$.

Now we need a definition:

Definition 10.2. For any $n$, we say that a square $S \in G_n$ is $s$-supported if for all $S' \in G_{n-1}$ we have $|C \cap (S - S')| \ge s$.

We define a flow on $\mathcal G$ as follows:

$f(S', S) = \min\{s/2,\ |S' \cap C|\}$ if $S \in G_{n+1}$, $S' \in G_n$, $S' \subseteq S$;
$f(S', S) = 0$ if $S \in G_{n+1}$, $S' \in G_n$, $S' \not\subseteq S$;
$f(S', S) = -f(S, S')$ if $S' \in G_{n+1}$, $S \in G_n$, $S \subseteq S'$;
$f(S', S) = 0$ if $S \in G_n$, $S' \in G_m$, $|m - n| \ne 1$.

Now let us make a few observations. Firstly, every point $w \in C$ has isolation radius $\ge \sqrt 2$, hence any small $1 \times 1$ square of $G_0$ contains at most one point of $C$. Thus clearly, for $s \ge 3$, no square $S' \in G_0$ can contain $\ge s/2$ points, hence by the above definition of the flow $f(S', S) = |S' \cap C|$ whenever $S' \subseteq S \in G_1$, and so

$\sum_{S' \in G_0}\sum_{S \in G_1} f(S', S) = |C|.$   (10.1)

For all $b \in \mathbb N$ we have

$\sum_{S' \in G_b}\sum_{S \in G_{b+1}} f(S', S) \ge 0.$   (10.2)

Now suppose $S \in G_n$ is $s$-supported, and consider the subsquares $S' \in G_{n-1}$, $S' \subseteq S$. If there is no $S'$ with $|S' \cap C| \ge s/2$, then clearly $f(S', S) = |S' \cap C|$ for all such $S'$, and hence

$\sum_{S' \in G_{n-1}} f(S', S) = \sum_{S' \subseteq S} |S' \cap C| = |S \cap C| \ge s,$

as is clear from the definition of $s$-supported squares. Then, since the mass going out of $S$ (towards its parent) is at most $s/2$ (as ensured by the definition of the flow), we get

$\sum_{S' \in \mathcal G} f(S', S) \ge \frac{s}{2}.$

Now assume instead that there is exactly one square $S' \in G_{n-1}$, $S' \subseteq S$, containing more than $s/2$ points of $C$. Then $S'$ sends mass $s/2$ to $S$, but since $S$ is $s$-supported, $|C \cap (S - S')| \ge s$, which means that the other squares $S'' \in G_{n-1}$, $S'' \subseteq S$, contribute $|S'' \cap C|$ each. Hence

$\sum_{T \in G_{n-1}} f(T, S) = \sum_{S'' \subseteq S,\ S'' \ne S'} |S'' \cap C| + \frac{s}{2} = |C \cap (S - S')| + \frac{s}{2} \ge \frac{3s}{2},$

and since the flow going out of $S$ is at most $s/2$, in this case $\sum_{T \in \mathcal G} f(T, S) \ge s$.

Finally, when there is more than one square in $G_{n-1}$ inside $S$ containing more than $s/2$ points of $C$, each sends exactly $s/2$ to $S$, so $S$ receives at least $s$ from its children but sends out at most $s/2$, hence $\sum_{T \in \mathcal G} f(T, S) \ge s/2$. Thus we conclude that whenever $S$ is $s$-supported,

$\sum_{S' \in \mathcal G} f(S', S) \ge \frac{s}{2}.$   (10.3)

Now we claim that the total mass swallowed is at most $|C|$, i.e.

$\sum_{n=1}^{b}\sum_{S \in G_n}\sum_{S' \in \mathcal G} f(S', S) \le |C|.$   (10.4)

The reason is as follows:

$\sum_{n=1}^{b}\sum_{S \in G_n}\sum_{S' \in \mathcal G} f(S', S) = \sum_{n=1}^{b}\Big[\sum_{S \in G_n}\sum_{S' \in G_{n-1}} f(S', S) + \sum_{S \in G_n}\sum_{S' \in G_{n+1}} f(S', S)\Big] = \sum_{S \in G_1}\sum_{S' \in G_0} f(S', S) + \sum_{S \in G_b}\sum_{S' \in G_{b+1}} f(S', S) = |C| + \sum_{S \in G_b}\sum_{S' \in G_{b+1}} f(S', S),$

by telescoping and (10.1); and by the definition of the flow the last sum is non-positive, hence the claim holds.

Then since each s-supported square swallows at least s/2 mass, hence clearly their number cannot exceed 2|C|/s.

Step 2: Now, using this set-up, we try to prove the lemma using a random tiling. First put $k = 20\delta^{-2}$ (more precisely, the smallest integer above this number). Let $\beta$ be chosen uniformly from the interval $[0, \log k]$. Then take $G_0$ to be a tiling of $\mathbb R^2$ by $e^\beta \times e^\beta$ squares, followed by $G_1$, a tiling of $\mathbb R^2$ by $e^\beta k \times e^\beta k$ squares aligned with $G_0$. Continuing, take $G_n$ to be the tiling of $\mathbb R^2$ by $e^\beta k^n \times e^\beta k^n$ squares aligned with $G_{n-1}$.

Definition 10.3. We call a point $w \in C$ a city in the square $S \in \mathcal G$ if
i) the edge length of $S$ is in $[4\delta^{-1}\rho_w, 5\delta^{-1}\rho_w]$, and
ii) the distance of $w$ to the centre of the square $S$ is at most $\delta^{-1}\rho_w$.

Now notice that if $w$ is a city in $S \in G_n$ and $w$ is $(\delta, s)$-supported, then $S$ must be $s$-supported. To see this, first remove a square $S' \in G_{n-1}$, $S' \subseteq S$, that lies outside the ball $B(w, \delta^{-1}\rho_w)$. Since $w$ is $(\delta, s)$-supported, the ball $B(w, \delta^{-1}\rho_w)$ contains at least $s$ points of $C$ (and it is contained in $S$, because $w$ is near the centre of $S$ and $S$ has edge length at least $4\delta^{-1}\rho_w$), hence even after removing $S'$, the remainder of $S$ still contains at least $s$ points of $C$.

Now suppose instead that we remove a square $S' \in G_{n-1}$ from inside the ball $B(w, \delta^{-1}\rho_w)$; the worst case is when $S' \subseteq B(w, \delta^{-1}\rho_w) - B(w, \delta\rho_w)$. Since this square is in $G_{n-1}$, it is an $e^\beta k^{n-1} \times e^\beta k^{n-1}$ square, and since the square $S$ containing $w$ is in $G_n$,

$4\delta^{-1}\rho_w \le e^\beta k^n \le 5\delta^{-1}\rho_w \;\Rightarrow\; \frac{1}{5}\delta\rho_w \le e^\beta k^{n-1} \le \frac{1}{4}\delta\rho_w$

(using $k = 20\delta^{-2}$). Hence the square $S'$ can be completely covered by a disc of radius $\delta\rho_w$, and since after removing such a disc there are still at least $s$ points of $C$ in the annulus between $B(w, \delta\rho_w)$ and $B(w, \delta^{-1}\rho_w)$ (by the definition of $(\delta, s)$-supported points), we are good.

Finally, note that no square can contain more than $O(\delta^{-2})$ cities. Indeed, by the definition of the isolation radius, around any city $w$ in the square $S$ there is a disc of area of order $\rho_w^2$ containing no other point of $C$, while the square $S$ itself has area between $16\delta^{-2}\rho_w^2$ and $25\delta^{-2}\rho_w^2$; hence there can be at most $c\,\delta^{-2}$ cities in $S$, where $c$ is some constant.

Now we want to show that, given a $(\delta, s)$-supported point $w$ of $C$,

$P[w \text{ is a city in some } S] \ge \frac{A}{\log(\delta^{-1})}.$

Given this, if $N$ is the number of $(\delta, s)$-supported points $w \in C$, then (using that each square contains at most $c\,\delta^{-2}$ cities, and that a square containing a $(\delta, s)$-supported city is $s$-supported)

$E[\#\{S \text{ that are } s\text{-supported}\}] \ge \frac{\delta^{2}}{c}\sum_{w\ (\delta,s)\text{-supported}}\ \sum_{S \in \mathcal G} E\big[\mathbf 1_{w \text{ is a city in } S}\big] \ge \frac{\delta^{2}}{c}\sum_{w\ (\delta,s)\text{-supported}} \frac{A}{\log(\delta^{-1})} = \frac{N A\,\delta^{2}}{c\,\log(\delta^{-1})}.$

But the number of $s$-supported squares $S$ is at most $2|C|/s$, hence

$N \le A_1\,\delta^{-2}\log(\delta^{-1})\,|C|/s.$

Hence, once we prove the above claim, we are done. Now, for $w$ to be a city in $S$, first of all the side length of $S$ (say $S \in G_n$) must satisfy, for some constant $A$,

$e^\beta k^n \approx A\,\delta^{-1}\rho_w \;\Rightarrow\; \beta + 2n\log(\delta^{-1}) \approx \log A + \log\rho_w + \log(\delta^{-1}).$

Thus, remembering that $\beta$ is uniform on $[0, \log k]$, the probability that there is an $n \in \mathbb N$ such that some $S \in G_n$ contains $w$ and has side length between $4\delta^{-1}\rho_w$ and $5\delta^{-1}\rho_w$ is $c/\log k$ for some constant $c$. Now, since there is a lot of space inside the square $S$, we can also arrange that $w$ is at distance at most $\delta^{-1}\rho_w$ from the centre of $S$; for this we may need to shift the square, but that only changes the probability by a constant factor. Thus this too is an event of probability at least $c_1/\log k \ge c_2/\log(\delta^{-1})$. Hence proved.


Corollary 10.4. Let $G_n$ be a finite planar triangulation and $\rho_n$ a uniformly chosen vertex. Let $P^{(n)}$ be a circle packing of $G_n$ with $C_{\rho_n} = \delta U$. Then there exists a constant $C$ such that for all $r \ge 2$, $s \ge 2$, we have

$P\big[\text{for all } p \in \mathbb R^2,\ |V_{B(0,r)-B(p,r^{-1})}| \ge s\big] \le \frac{C r^2 \log r}{s}.$

Proof. Apply the lemma proved above with $\delta = r^{-1}$, the same $s$, and $C$ taken to be the set of centres of $P^{(n)}$. 

11. June 23

Lemma 11.1. Let $G$ be a finite planar graph with the degree of each vertex bounded by $D$. Let $\rho$ be a uniformly randomly chosen vertex. Then there exists $C = C(D)$ such that for all $k$ we have

$P\Big[\text{there exists } B \subseteq V :\ \rho \in B,\ B \text{ connected},\ |B| \le C k,\ R_{\mathrm{eff}}(\rho \leftrightarrow V - B) \ge \frac{\log k}{C}\Big] \ge 1 - \frac{k^{-1/3}\log k}{C}.$

Proof. Take $r = k$ and $s = r^3$ in the previous lemma (meaning the corollary above, from last day). Assume without loss of generality that we are in a triangulation. Take

$B = V_{B(0,r)-B(p,1/r)} = V_{B(0,k)-B(p,1/k)},$

where $p$ is a suitable point in the ball of radius $r$ about the origin (the centre of $C_\rho$). Note that from the argument used in proving Theorem 9.1 from Lemma 9.2, the annulus trick gives

$R_{\mathrm{eff}}(\rho \leftrightarrow V - V_{B(0,r)}) \ge c_1\log r = c_1\log k.$

Note that $C_\rho = \delta U$ is the unit circle and every degree is bounded by $D$, hence by the Ring Lemma, for any vertex $v$ adjacent to $\rho$ the ratio $r_v/r_\rho$ is bounded above and below, so $r_v$ cannot be too small. This means that $p$ must be at least a constant distance away from the origin (which in this case is the centre of the circle $C_\rho$).

Then, assuming that the distance between $p$ and the origin is at least some constant, and since we are removing the ball $B(p, 1/k)$ about $p$, we need of the order of $\log k$ "annular steps" around $p$ to get from radius $1/k$ out to that constant distance, and thus, again by the annulus trick, the effective resistance satisfies

$R_{\mathrm{eff}}(\rho \leftrightarrow V_{B(p,1/k)}) \ge c_2\log r = c_2\log k.$

Note that if we consider three vertices $a, y, z$ then

$\frac{1}{R_{\mathrm{eff}}(a \leftrightarrow \{y, z\})} \le \frac{1}{R_{\mathrm{eff}}(a \leftrightarrow y)} + \frac{1}{R_{\mathrm{eff}}(a \leftrightarrow z)}.$

This is because, for some constant $C_0$,

$\frac{1}{R_{\mathrm{eff}}(a \leftrightarrow \{y,z\})} = C_0\,P_a[\text{hit } y \text{ or } z \text{ before returning to } a] \le C_0\,P_a[\text{hit } y \text{ before returning to } a] + C_0\,P_a[\text{hit } z \text{ before returning to } a] = \frac{1}{R_{\mathrm{eff}}(a \leftrightarrow y)} + \frac{1}{R_{\mathrm{eff}}(a \leftrightarrow z)}.$

Consequently, we shall have

$R_{\mathrm{eff}}(\rho \leftrightarrow V - B) \ge c_3\log k.$

Now by the last corollary of last day, i.e. Corollary 10.4, we know that

$P\big[\text{for all } p \in \mathbb R^2,\ |V_{B(0,r)-B(p,r^{-1})}| \ge s\big] = P\big[\text{for all } p \in \mathbb R^2,\ |V_{B(0,r)-B(p,r^{-1})}| \ge r^3\big] \le \frac{C r^2 \log r}{r^3} = C r^{-1}\log r.$

Thus we finally have

$P\Big[\text{there exists } B \subseteq V :\ \rho \in B,\ B \text{ connected},\ |B| \le C k^3,\ R_{\mathrm{eff}}(\rho \leftrightarrow V - B) \ge \frac{\log(k^3)}{C} = \frac{\log k}{C'}\Big] \ge 1 - \frac{k^{-1}\log k}{C},$

and renaming $k^3$ as $k$ gives the statement of the lemma.

Hence proved. 

Corollary 11.2. If $(G, \rho)$ is a distributional limit of bounded-degree planar maps, then almost surely there exists $C$ such that for all $k$ there exists $B_k$ with

$|B_k| \le C k \quad\text{and}\quad R_{\mathrm{eff}}(\rho \leftrightarrow G - B_k) \ge C\log k.$

Proof. Let $A_k = \{\text{there exists no } B \subseteq V : |B| \le C k,\ R_{\mathrm{eff}}(\rho \leftrightarrow G - B) \ge C\log k\}$. By the previous lemma we have

$P(A_k) \le A\,k^{-1/3}\log k.$

Then consider the subsequence $\{A_{2^j},\ j \in \mathbb N\}$. Clearly

$\sum_{j=1}^{\infty} P[A_{2^j}] \le A\sum_{j=1}^{\infty}\frac{j\log 2}{2^{j/3}} < \infty.$

Hence by the Borel–Cantelli lemma, almost surely only finitely many of the events $A_{2^j}$ occur. In particular, we could also have dropped the restriction $|B_k| \le C k$. In any case, we are done with the proof. 

Now for the proof of Theorem 9.4, using the above lemmas:

Proof. First we define what a star-tree transformation is. Diagrams would have helped, but it is difficult to incorporate them here; I shall try.

We transform the star around a vertex of $G$ into a tree, producing $G^*$. This consists of two steps: i) $G \to G'$, in which we subdivide each edge radiating out of the centre of the star (the vertex of high degree) into two; ii) for $v \in G$ with $d_v = \deg(v) > 3$, replace $v$ by a binary tree $T_v$ (in which every vertex has degree $\le 3$) with $d_v$ leaves and height $\lceil\log_2 d_v\rceil$.
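A small sketch of the bookkeeping behind this transformation, anticipating the flow and energy comparison in the lemma below: the flows $\theta(v, v_j)$ entering a high-degree vertex are routed up a binary tree whose edges all have resistance $1/d_v$ (the pairing scheme and the names are mine, a sketch rather than the exact tree used in class):

def star_tree_energy(theta):
    # theta: list of flow values theta(v, v_j) on the d_v edges of a star.
    # Route them through a binary tree with d_v leaves; every tree edge
    # carries the sum of the leaf flows below it and has resistance 1/d_v.
    # Returns the energy of this tree flow, which the lemma below bounds
    # by 2 * sum(theta_j ** 2).
    d = len(theta)
    level = list(theta)
    energy = sum(f * f for f in level) / d     # the d_v leaf edges
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            f = sum(level[i:i + 2])            # flow on the edge above this pair
            energy += f * f / d
            nxt.append(f)
        level = nxt
    return energy

flows = [0.3, -0.1, 0.25, 0.05, -0.5]          # a flow through a degree-5 vertex
print(star_tree_energy(flows), 2 * sum(f * f for f in flows))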

Now we have the following lemma:

Lemma 11.3. Let $G$ be a planar map and let $G^*$ be the map obtained after performing the star-tree transformation wherever necessary. We assign the following edge resistances to $G^*$: if $e^*$ is a new edge created by the transformation at the vertex $v$, i.e. $e^* \in T_v$, then

$R_{e^*} = \frac{1}{\deg(v)}.$

If $G^*$ is recurrent, then so is $G$.

Proof. Assume that $G$ is transient; then there exists a flow $\theta$ from $\rho$ to $\infty$ with energy $\mathcal E(\theta) < \infty$. We construct a flow $\theta^*$ on $G^*$ from $\rho$ to $\infty$ with $\mathcal E(\theta^*) < \infty$. Let $v$ be the root of the tree obtained from the star-tree transformation at a given site, and let $v_1, v_2, \ldots, v_n$ be the leaves of the tree (clearly $v$ had $n$ neighbours initially). For an edge $e$ of the tree, define

$\theta^*(e) = \sum_{v_j \text{ a descendant of } e} \theta(v, v_j).$

One easily verifies that the conditions of a flow are satisfied. Assume that $e$ is at height (distance) $l$ from the root $v$; then $e$ has at most $2^{h-l}$ leaf descendants, where $h$ is the height of $T_v$. The energy of $\theta^*$ at $e$ is

$R_e\,\theta^*(e)^2 = \frac{1}{d_v}\Big\{\sum_{v_j \text{ a descendant of } e}\theta(v, v_j)\Big\}^2 \le \frac{2^{h-l}}{d_v}\sum_{v_j \text{ a descendant of } e}\theta(v, v_j)^2$

by Cauchy–Schwarz. Hence, summing over all $e$ at level $l$, we get

$\sum_{e \text{ at level } l} R_e\,\theta^*(e)^2 \le \frac{2^{h-l}}{d_v}\sum_{1 \le j \le n}\theta(v, v_j)^2,$

since the different edges $e$ at level $l$ have disjoint sets of leaf descendants. Finally, summing over all $l$, we have

$\sum_{e \in T_v} R_e\,\theta^*(e)^2 \le \frac{2^{h}}{d_v}\sum_{1 \le j \le n}\theta(v, v_j)^2.$

Now the height $h$ of the tree $T_v$ depends on the degree $d_v$ of $v$; since we take the most balanced tree with $d_v$ leaves, clearly $2^{h} \le 2 d_v$. So we have

$\sum_{e \in T_v} R_e\,\theta^*(e)^2 \le 2\sum_{1 \le j \le n}\theta(v, v_j)^2 \;\Rightarrow\; \sum_{e \in G^*} R_e\,\theta^*(e)^2 \le 2\sum_{v \in G}\sum_{v_j \sim v}\theta(v, v_j)^2 = 4\,\mathcal E(\theta),$

and so $\mathcal E(\theta^*) < \infty$. Hence proved. 

Theorem 11.4. If $(U, \rho)$ is a distributional limit of finite planar maps, and there exists $C$ such that $P[\deg(\rho) \ge k] \le 2e^{-Ck}$, then $U$ is almost surely recurrent.

Proof. Let

(Gn, ρn) → (U, ρ).

Instead of choosing $\rho_n$ uniformly in $G_n$, we choose $\rho_n$ from the stationary measure, that is,

$P[\rho_n = v] = \frac{\deg(v)}{2|E|}.$

(U, ρ) = (U, X1) in distribution

where X1 is any random neighbour of ρ which is uniformly chosen.

If the average degree of $G_n$ is $\le D$, then (here $\rho_n^U$ indicates that $\rho_n$ is uniformly chosen, and $\rho_n^\pi$ that it is chosen from the stationary distribution):

$P[(G_n, \rho_n^U) \in A] = \sum_{v \in G_n}\frac{1}{n}\,P[(G_n, v) \in A] = \sum_{v \in G_n} P[(G_n, v) \in A]\,\frac{\deg(v)}{2|E|}\cdot\frac{2|E|}{n\deg(v)}$
$\le \sum_{v \in G_n} P[(G_n, v) \in A]\,\frac{\deg(v)}{2|E|}\cdot\frac{2|E|}{n}$ (as $\deg(v) \ge 1$)
$= \frac{2|E|}{n}\sum_{v \in G_n} P[(G_n, v) \in A]\,\frac{\deg(v)}{2|E|} \le D\,P[(G_n, \rho_n^\pi) \in A],$

since $2|E|/n$ is the average degree.

Now from $(G_n, \rho_n)$ we perform the star-tree transformation to get $(G_n^*, \rho_n^*)$, where $\rho_n^*$ is a uniformly chosen vertex of $T_{\rho_n}$. And let $(U^*, \rho^*)$ be the star-tree transform of $(U, \rho)$, where again $\rho^*$ is a uniformly chosen vertex of $T_\rho$. Then we have

$(G_n^*, \rho_n^*) \to (U^*, \rho^*)$ in the distributional limit,

since the star-tree transform is only a local operation.

12. June 24 A few observations:

i) (U ∗, ρ∗) is stationary. Note that the way we obtain (U ∗, ρ∗), we cannot say that it will be readily stationary, but the measure ∗ induced when we choose ρ uniformly out of the vertices in Tρ is absolutely continuous with respect to the measure induced when we choose ρ∗ according to the stationary distribution, and vice versa, hence it won’t really matter.

ii) For all $n$, we have $P[\deg_U(X_n) \ge k] \le e^{-Ck}$.

Goal: $(U^*, \rho^*)$ with the resistances $R''_e = \frac{1}{\deg_U(v)}$ for $e \in T_v$ is recurrent.

Proof. By Corollary 11.2, for all $k$ there exists $B_k$ such that $|B_k| \le C k$ and

$R_{\mathrm{eff}}(\rho^* \leftrightarrow U^* - B_k;\ R_e \equiv 1) \ge \frac{\log k}{C}$

almost surely on $(U^*, \rho^*)$. Put

$R'_e = 1$ if $e \in T_v$ with $\deg_U(v) \le A\log k$, and $R'_e = \frac{1}{\deg_U(v)}$ if $e \in T_v$ with $\deg_U(v) > A\log k$.

Claim:

$R_{\mathrm{eff}}(\rho^* \leftrightarrow U^* - B_k;\ R'_e) \ge \frac{\log k}{C}.$

Proof. We know that

$R_{\mathrm{eff}}(\rho^* \leftrightarrow U^* - B_k;\ R'_e) = \frac{1}{\pi_{\rho^*}\,P_{\rho^*}(\text{visit } U^* - B_k \text{ before returning to } \rho^*)}.$

We claim that

$P[\text{hit an edge with } R'_e < 1 \text{ before hitting } U^* - B_k] \le \frac{1}{k}.$

Indeed, by the commute time identity we know that

$E_{\rho^*}[\text{time to escape } B_k] \le R_{\mathrm{eff}}(\rho^* \leftrightarrow V - B_k)\times\sum_{v \in B_k}\pi_v,$

but note that $R_{\mathrm{eff}}(\rho^* \leftrightarrow V - B_k) \le k$, and also $\sum_{v \in B_k}\pi_v \le C_1 k$ since $|B_k| \le C k$, and so

$P_{\rho^*}[\text{number of steps to exit } B_k > k^3] \le \frac{1}{k}$ by Markov's inequality.

Otherwise, i.e. if the walk did meet an edge with $R'_e < 1$ within $k^3$ steps, there would be some $t \in \{1, 2, \ldots, k^3\}$ with $\deg(X_t) \ge A\log k$. But by stationarity and the exponential tail of the degree, a union bound gives this event probability at most $k^3 e^{-cA\log k} \le 1/k$ once $A$ is taken large enough.

Now, for all $m > 0$, we have

$R_{\mathrm{eff}}(B(\rho^*, m) \leftrightarrow U^* - B_k;\ R'_e) \ge \frac{\log k}{C'} - m,$

since the resistance between $\rho^*$ and the boundary of $B(\rho^*, m)$ is at most $m$ (every edge has resistance $R'_e \le 1$). Now

$R''_e \ge \frac{1}{A\log k}\,R'_e$ for all $e$

pointwise, hence Rayleigh's monotonicity gives

$R_{\mathrm{eff}}(B(\rho^*, m) \leftrightarrow U^* - B_k;\ R''_e) \ge \frac{1}{C'A} - \frac{m}{A\log k}.$

Letting $k \to \infty$, we get that

$R_{\mathrm{eff}}(B(\rho^*, m) \leftrightarrow \infty;\ R''_e) \ge \frac{1}{C'A}$ for all $m$.

Hence, by the annulus trick, putting annuli after annuli extending out to infinity, with the effective resistance across each at least a positive constant, we can conclude that the effective resistance between $\rho^*$ and $\infty$ is infinite, and hence the graph is indeed recurrent. 

13. June 26

Theorem 13.1 (He–Schramm). Let $G$ be a bounded-degree, one-ended planar triangulation.
1) If $P = \{C_v, v \in V\}$ is a circle packing of $G$ such that carr(P) = $\mathbb R^2$, then $G$ is recurrent.
2) If carr(P) = $U \ne \mathbb R^2$ (where $U$ must then be some simply connected set), then $G$ is transient.
3) If $G$ is transient, then there exists a circle packing $P$ such that carr(P) = $U$, the unit disc.

For (1), note that if we consider Cρ = δU that is the unit circle, and two circles of radii R and 10R concentric with it, then by the annulus trick we can say that the effective resistance across the annulus between R and 10R, is at least some constant C > 0. Thus we can say that resistance across an annulus of same scale is bounded below by a constant.

But it is also bounded above by a constant. To see this, we need to use the random path lemma. We choose a point uniformly on the circle of radius 10R, say Q, and connect it by a straight line with the origin 0. Now there will be finitely many circles in the circle-packing P which intersect this line, and we can, using (possibly) the centers of these circles, get a discrete approximation of this path. Now consider a small circle of radius r somewhere inside the annulus between R and 10R. If the line joining Q and the origin is to pass through this small disc, then Q would have to lie in the arc on the circle of radius 10R bounded by the radii tangent to this small disc, and the length of that arc is C × (r/R) for some constant C.

Now we have to define something in a general set-up. Given a measure $\mu$ on self-avoiding paths between two points $a$ and $z$, the function

$f(\vec e) = E_\mu\big[\mathbf 1_{\text{the path goes through } \vec e} - \mathbf 1_{\text{the path goes through } \overleftarrow e}\big]$

is actually a unit flow. Now from Thomson's principle we know that the effective resistance is the infimum of the energies of all possible unit flows, so consequently (since the resistance of each edge is 1)

$R_{\mathrm{eff}} \le \sum_e f(e)^2 \le \sum_e P[\text{the path went through } e]^2.$

So here, since we found that the probability that the line joining 0 and $Q$ passes through a disc of radius $r_e$ is $C\,r_e/R$, with $C$ a constant bounded above and below no matter what $r_e$ is and where the disc sits inside the annulus, we get

$R_{\mathrm{eff}} \le \sum_e C_0\Big(\frac{r_e}{R}\Big)^2 = \frac{C_1}{R^2}\sum_e \text{area of the disc of radius } r_e \le \frac{C_1}{R^2}\times\text{area of the annulus} = C_2,$

for some constant $C_2$, since the area of the annulus is proportional to $R^2$. Thus indeed the effective resistance is bounded above by a constant.
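The only probabilistic input here, that a uniformly chosen radius through the annulus meets a given small disc with probability of order $r/R$, is easy to check by simulation. A quick sketch (all names and the specific numbers are mine):

import math, random

def hit_probability(R, center, r, trials=200_000):
    # Q is uniform on the circle of radius 10R; estimate how often the
    # segment from 0 to Q meets the disc of radius r centred at `center`
    # (a point inside the annulus).  The notes claim this is of order r/R.
    cx, cy = center
    hits = 0
    for _ in range(trials):
        phi = random.uniform(0.0, 2.0 * math.pi)
        qx, qy = 10 * R * math.cos(phi), 10 * R * math.sin(phi)
        dist = abs(cx * qy - cy * qx) / (10 * R)   # distance of the centre to the line 0-Q
        if dist <= r and cx * qx + cy * qy > 0:    # and the centre projects onto the segment
            hits += 1
    return hits / trials

random.seed(1)
print(hit_probability(1.0, (5.0, 0.0), 0.1))       # roughly a constant times r/R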

Now we come to the proof of part (3) of the He-Schramm theorem. Proof. Take an exhaustion of G as follows:

Vj = B(ρ, j) ∪ {all finite components of V − B(ρ, j)},B(ρ, j) according to graph distance.

So $V_j$ is a triangulation except for the outer face. Apply the circle packing theorem to get, for every $j$, a circle packing $P^{(j)} = \{C_v^j, v \in V_j\}$ such that, writing $\delta V_j$ for the set of vertices on the outer face, only the vertices $v \in \delta V_j$ have $C_v^j$ tangent to the unit circle $\delta U$. Now apply a Möbius transformation so that the centre of $C_\rho^j$ is at the origin.
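The Möbius transformation used here can be taken to be the standard automorphism of the unit disc moving a chosen point to the origin; a one-line sketch (the wrapper is mine; such a map sends circles to circles, though not centres to centres exactly):

def disc_automorphism(a):
    # The Mobius map z -> (z - a) / (1 - conj(a) * z): it preserves the unit
    # disc and sends the point a to 0.
    return lambda z: (z - a) / (1 - a.conjugate() * z)

m = disc_automorphism(0.3 + 0.2j)
print(abs(m(0.3 + 0.2j)))                 # 0: the chosen point goes to the origin
print(abs(m(1 + 0j)), abs(m(1j)))         # points of the unit circle stay on it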

Claim: There exists $C > 0$ such that $\mathrm{radius}(C_\rho^j) \ge C$ for all $j$.

Reason: By the annulus trick we can immediately conclude that

$R_{\mathrm{eff}}(\rho \leftrightarrow \infty) \ge C\log\Big(\frac{1}{\mathrm{radius}(C_\rho^j)}\Big)$

for some constant $C > 0$; since $G$ is transient the left-hand side is finite, hence the above holds.

This means that the sequence $\{\mathrm{radius}(C_\rho^j)\}$ is bounded (and bounded away from 0), so we can find a convergent subsequence $\{\mathrm{radius}(C_\rho^{j_k})\}$; and since the same holds for every vertex, we can find, by the diagonal argument, a common subsequence $\{j_k\}$ such that for every $v \in V$ the radii and centres of $C_v^{j_k}$ converge. We may therefore consider the limit $P = \{C_v\}$ of this subsequence of circle packings, which is again a circle packing of $G$, and carr(P) $\subseteq U$, the unit disc.

Let Z be the set of accumulation points of P .

Claim: Z = δU where δU is the unit circle.

I did not have time to verify this, but Professor asked us to convince ourselves that Z is indeed a closed curve.

Claim: For all $\varepsilon > 0$ there exists $J$ such that for all $j \ge J$ we have

$R_{\mathrm{eff}}(\rho \leftrightarrow v;\ V_j) \ge C\log\Big(\frac{1}{\varepsilon}\Big)$ for all $v \in \delta V_j$.

Proof. Take $Z_\varepsilon = \{z : \mathrm{dist}(z, Z) \le \varepsilon\}$.

Then $V_{U - Z_\varepsilon}$ is a finite set, hence there exists some $J$ such that

$V_{U - Z_\varepsilon} \subseteq V_j$ for all $j \ge J$.

Then for all $v \in \delta V_j$ the circle $C_v^j$ must remain compressed in the space between $Z$ and the boundary of $Z_\varepsilon$, and hence

$\mathrm{radius}(C_v^j) \le \varepsilon$ for all $v \in \delta V_j$.

Now, since the distance between the centre of $C_\rho^j$ and the centre of $C_v^j$ is of constant order while the radius of $C_v^j$ is at most $\varepsilon$, the annulus trick lets us conclude that

$R_{\mathrm{eff}}(\rho \leftrightarrow v;\ V_j) \ge C\log\Big(\frac{1}{\varepsilon}\Big).$



Corollary 13.2. For $v \in \delta V_j$ we have $\mathrm{radius}(C_v^j) \le \delta(\varepsilon)$, where $\delta(x) \to 0$ as $x \to 0$.

Now, in each $P^{(j)}$ the circles come closer and closer to $\delta U$, but we have to ensure that in the limit the mass does not flow inward. Let $\varepsilon > 0$ and $z \in Z$, and consider the set of vertices

$W = V_{B(z, \varepsilon)}.$

By the annulus trick we know that

$R_{\mathrm{eff}}(\rho \leftrightarrow W \cap V_j;\ V_j) \ge C\log\Big(\frac{1}{\varepsilon}\Big).$

So by the random path method (not sure if this reasoning is correct) $W \cap V_j$ has to have diameter at most $\delta(\varepsilon)$ in $P^{(j)}$. Also $W \cap V_j \cap \delta V_j \ne \emptyset$, hence in $P^{(j)}$ all the centres of $W \cap V_j$ are close to $\delta U$. And since $W \cap V_j$ is an increasing sequence of sets, $z$ is arbitrarily close to $\delta U$, which implies that $z \in \delta U$.

Theorem 13.3. G is a transient, bounded degree, one-ended planar triangulation. Xn is a simple random walk on G, and Z(u) is the center of the graph circle Cu. Then with probability 1, the sequence Z(Xn) converges to some point in δU.

Lemma 13.4. Suppose $v_0$ is such that $|Z(v_0)| > 1 - \delta$, and let $t > 1$. Then

$P_{v_0}\big[\text{there exists } n \text{ such that } |Z(X_n) - Z(v_0)| > t\delta\big] \le \frac{C}{\log t}.$

Proof of the theorem using the lemma. By the lemma, for all $\delta > 0$ we have

$P\big[\text{there exists } n_0 \text{ such that } |Z(X_{n_0})| \ge 1 - \delta, \text{ and for all } n \ge n_0,\ |Z(X_n) - Z(X_{n_0})| \le \sqrt\delta\big] \ge 1 - \frac{C}{\log(1/\delta)},$

using $t = 1/\sqrt\delta$. Take $\delta_k \to 0$ such that

$\sum_k \frac{C}{\log(\delta_k^{-1})} < \infty.$

Then by the Borel–Cantelli lemma we can conclude that, calling the event

$\big\{\text{there exists } n_0 \text{ such that } |Z(X_{n_0})| \ge 1 - \delta, \text{ and for all } n \ge n_0,\ |Z(X_n) - Z(X_{n_0})| \le \sqrt\delta\big\} = A_\delta,$

the events $\{A_{\delta_k}^c\}$ occur only finitely often, and hence the theorem is proved. 

Proof of the lemma. Again, diagrams would have helped, but I could not incorporate any. We consider the point $v_0$ and consider two possible cases:

i)

radius(Cv0 ) ∼ δ. Then we also consider the boundary of the circle which is centered around v0 and radius tδ, but only the portion which is within U, the unit disc. Let U − B(v0, tδ) be denoted by Atδ. Then by the random path lemma, we have

Reff (Cv0 ↔ δU ∪ Atδ) ≤ C, some constant; and by the annulus trick,

Reff (Cv0 ↔ Atδ) ≥ C. log t. Now we have seen the following result before:

$P_x(\tau_a < \tau_z) \le \frac{R_{\mathrm{eff}}(x \leftrightarrow \{a, z\})}{R_{\mathrm{eff}}(x \leftrightarrow a)},$

and so using this fact and the above bounds, we get the desired result [because note that in order for the event described in the lemma to take place, the walker has to exit the region U ∩ B(v0, tδ) through the portion of the boundary which is distance tδ away from v0].

ii)Here

radius(Cv0 ) << δ.

Let us consider a square of side 1 (or some constant) which contains $v_0$ inside it, with $v_0$ at most $\delta$ away from its right boundary. Consider the harmonic function $h(\cdot)$ which takes the value 0 on the left boundary and 1 on the right. If $x, y$ are vertices then

$|Z(x) - Z(y)| \le r \;\Rightarrow\; |h(x) - h(y)| \le \frac{C}{\log(1/r)}.$

Now write $h(x) = a$ and $h(y) = b$, and set $A = \{v : h(v) \le a\}$ and $B = \{v : h(v) \ge b\}$. Consider the current flow of constant strength from the left boundary to the right. This induces a current flow of constant strength from $A$ to $B$, with potential difference $b - a$. Thus by Ohm's law, we have

Reff (A ↔ B) ≥ C(b − a). From this, and from the annulus trick, we get the proof. 

14. June 27 Today Professor talked about harmonic functions and ”final” behaviour of random walk on planar graphs.

When we consider the simple random walk, a function $h : V \to \mathbb R$ is called harmonic if

$h(x) = \frac{1}{\deg(x)}\sum_{y : y \sim x} h(y).$

We shall be interested primarily in bounded harmonic functions which are non-constant. And we have already seen that such functions do not exist on finite networks when we require the function to be harmonic at every vertex.
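For a concrete graph this condition is a finite linear system and is trivial to check; a tiny sketch (graph, values and names are mine):

def is_harmonic(adj, h, vertices):
    # Check h(x) = (1/deg(x)) * sum of h over the neighbours of x, for every
    # x in `vertices` (simple random walk; adj maps a vertex to its neighbours).
    return all(abs(h[x] - sum(h[y] for y in adj[x]) / len(adj[x])) < 1e-12
               for x in vertices)

# A path 0-1-2-3 with boundary values h(0)=0, h(3)=1: linear interpolation
# is harmonic at the two interior vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {0: 0.0, 1: 1/3, 2: 2/3, 3: 1.0}
print(is_harmonic(adj, h, [1, 2]))   # True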

Definition 14.1. A graph G is said to be Liouville if there are no bounded non-constant harmonic functions on G.

Lemma 14.2. Consider a simple random walk on G. If for all x, y, there exists a coupling (Xn,Yn) of two simple random walks such that (X0,Y0) = (x, y) and limn→∞ P [Xn 6= Yn] = 0, then G is Liouville.

Proof. Let $h$ be a bounded harmonic function on $G$, and let $x$ and $y$ be any two vertices. Because $X_n$ is a simple random walk and $h$ is harmonic, $h(X_n)$ is a martingale. Let's check this: let $\mathcal F_n$ be the sigma-field generated by $X_1, X_2, \ldots, X_n$, for any $n \in \mathbb N$. Then

$E[h(X_{n+1}) \mid \mathcal F_n] = E[h(X_{n+1}) \mid X_n] = \sum_{y : y \sim X_n} h(y)\,P[X_{n+1} = y \mid X_n] = \frac{1}{\deg(X_n)}\sum_{y : y \sim X_n} h(y) = h(X_n),$

as $h$ is harmonic. Consequently the (unconditional) expectation of $h(X_n)$ does not change with $n$. Thus we have

$|h(x) - h(y)| = \big|E[h(X_n)] - E[h(Y_n)]\big| = \big|E[h(X_n) - h(Y_n)]\big| \le 2\sup_{v \in V}|h(v)|\times P[X_n \ne Y_n] \to 0$ as $n \to \infty$.

Here we have used the fact that $h$ is bounded, because that is why the supremum is finite. Hence $h(x) = h(y)$ for any two vertices $x, y \in V$, and thus the function $h$ is constant. Hence proved. 
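The coupling hypothesis of the lemma can itself be illustrated by simulation; the sketch below implements the coordinate-wise coupling of two simple random walks on $\mathbb Z^d$ that is used for Blackwell's theorem further down (names are mine, and the parity issue — the starting points should differ by an even amount in each coordinate — is ignored):

import random

def coupled_srw(x, y, n_steps, d=2):
    # Coordinate-wise coupling of two simple random walks on Z^d: at each
    # step pick a coordinate; if the walks already agree in it they move
    # together, otherwise they move independently, so the difference in that
    # coordinate is a recurrent walk on 2Z and eventually hits 0.
    x, y = list(x), list(y)
    for t in range(n_steps):
        i = random.randrange(d)
        step = random.choice((-1, 1))
        same = (x[i] == y[i])
        x[i] += step
        y[i] += step if same else random.choice((-1, 1))
        if x == y:
            return t + 1                 # coupling time
    return None                          # not coupled yet

random.seed(0)
runs = [coupled_srw((0, 0), (2, 4), 50_000) for _ in range(200)]
print(sum(t is not None for t in runs) / 200)   # fraction coupled so far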

Lemma 14.3. If there exists a coupling such that almost surely there exist $n, k$ with $X_n = Y_{n+k}$, then $G$ is Liouville.

Proof. Again we proceed in the same way, but with

$|h(x) - h(y)| = \big|E[h(X_n)] - E[h(Y_{n+k})]\big| \le 2\sup_{v \in V}|h(v)|\times P[X_n \ne Y_{n+k}] \to 0.$ 

Now the property that a graph is Liouville essentially means that the random walker on the graph is eventually sucked in one particular direction, which is, so to speak, the infinite direction.

From the previous lemma it is easy to see that any recurrent graph G is Liouville (one of the walkers can just wait until the other catches up).

Theorem 14.4 (Blackwell, ’55). Zd is Liouville, for all d ∈ N. Proof. Note that we can get a coupling separately for each coordinate. Which means that if in dimension d we denote the random walks as

Xn = (Xn,1,Xn,2, ..., Xn,d) and Yn = (Yn,1,Yn,2, ..., Yn,d), then for every 1 ≤ i ≤ d, once Xn,i = Yn,i, we have Xm,i = Ym,i for all m ≥ n. Then since by recurrence we know that Z is Liouville, hence this coordinate-wise coupling also allows us to conclude that Zd is Liouville.  An example of a non-Liouville graph: the binary tree. Let the root be ρ, and let the subtree to the right of ρ be Aright and the one to the left be Aleft. Now we define

fA(x) = Px[there exists n0 such that for all n ≥ n0,Xn ∈ Aright]. This function is harmonic and non-constant.

Fact: All bounded harmonic functions on the tree are linear combinations of such fA’s.

Theorem 14.5 (Benjamini-Schramm). If G is a bounded degree planar graph, then G is Liouville if and only if it is recurrent. Thus in this case, indeed transience implies non-Liouville.

Proof. We treat the case where $G$ is a transient, bounded-degree, one-ended planar triangulation. Pack $G$ with a circle packing $P$ contained in the unit disc $U$, and let $A \subseteq \delta U$. Recall the previous day's notation: for any vertex $v \in G$, $Z(v)$ denotes the centre of the circle $C_v$ in the packing $P$. Now define

$h_A(x) = P_x\big[\lim_{n \to \infty} Z(X_n) \in A\big],$

which is basically the probability that we exit through $A$. To see that it is indeed harmonic:

$h_A(x) = P_x\big[\lim_n Z(X_n) \in A\big] = \sum_{y : y \sim x} P_x\big[\lim_n Z(X_n) \in A \mid X_1 = y\big]\,P[X_1 = y \mid X_0 = x] = \sum_{y : y \sim x} P_y\big[\lim_n Z(X_n) \in A\big]\,\frac{1}{\deg(x)} = \frac{1}{\deg(x)}\sum_{y : y \sim x} h_A(y),$

and it is obviously bounded. Also, as we have seen in the previous lecture, if we start from a point $x$ which is within some very small $\delta$ of $A$, then we always remain very close to $A$, and so with very high probability we eventually converge into $A$. On the other hand, if we start from a point $x$ which is $\delta$-close to the arc of $\delta U$ diametrically opposite $A$, then the probability of converging into $A$ is very low. Hence clearly this function is non-constant. 

We call the space of all harmonic functions the Poisson boundary (did I write this correctly?).

Question: Are these all the bounded harmonic functions?

Theorem 14.7 (Angel, Barlow, Gurel, Nachmias). Could not catch one name. The answer to the above question is YES. For all h : V → R which are bounded, non-constant, harmonic, there exists g : δU → R such that

$h(x) = E_x\big[g\big(\lim_{n \to \infty} Z(X_n)\big)\big].$

Remark 14.8. Probably not relevant here, but since the Professor mentioned it, I am writing it down: we can circle pack a transient graph in any simply connected domain.

Question: Consider a circle packing that fits completely inside a $1 \times 1$ square. Start your random walk from some $x$ in the middle. Is it true that there exists some constant $C$ such that we always have

Px[exit from the top/left/right/bottom] ≥ C? By this, I mean that we can consider any one of the top/left/right/bottom boundaries.

In Z2, optional stopping theorem helps as we are in a martingale set-up.

Note that we know already:

$P_x(\tau_z < \tau_a) = \frac{R_{\mathrm{eff}}(a \leftrightarrow x) - R_{\mathrm{eff}}(x \leftrightarrow z) + R_{\mathrm{eff}}(a \leftrightarrow z)}{2\,R_{\mathrm{eff}}(a \leftrightarrow z)}.$

We now use Harnack inequalities to overcome the difficulties we have faced earlier.

Theorem 14.9 (Harnack inequality for circle packings). Consider a circle packing $P$ of a bounded-degree planar triangulation such that carr(P) $\subseteq B(x, 2r)$, the Euclidean ball of radius $2r$ around $x$. Let $h : V \to \mathbb R$ be a positive function, defined and harmonic on $V_{B(x,2r)}$. Then

$\max_{y \in V_{B(x,(2-\alpha)r)}} h(y) \le C\cdot\min_{y \in V_{B(x,(2-\alpha)r)}} h(y),$

for some constant $C$ depending only on the degree bound and on $\alpha$, where $0 < \alpha < 2$.

Consider the unit circle δU and a point a on it. Suppose, we have two random walks such that conditional on the event that we exit through a, the two walks get coupled with probability 1. We have the unconditional Harnack inequality, but here in this conditional set up, we need to apply it twice. Note that if we draw concentric circles about a until we reach the center of δU, i.e. the origin, then as long as these annuli are not too close to a, the probability of being sucked in at a is pretty much the same within each such annulus. But we still need to investigate the case where the random walker starts at some point near the center, then walks to a point b very close to the boundary δU, but b is still far from a, and then the walker walks from b to a, all the while keeping close to the boundary.

And this is where the boundary Harnack inequality helps.

Random transient triangulations: The constructions were done by Angel, Ray and Curien. They have some good properties: i) Unimodular; ii) Markovian; but the downside is that they all have unbounded degrees. And in the circle packing of unbounded degree graphs in unit discs, we can simply add a drift to make the walker spiral to the boundary, or in fact follow any weird curve.

Theorem 14.10. Let (G, ρ) be a unimodular, one-ended triangulation. Then the following conditions are equivalent: i) E[deg(ρ)] ≥ 6; ii) G can be circle packed in the unit disc U (hence transient); iii) G is non-Liouville. Also in this case, the random walk converges to the boundary δU with exit measure dense and non-atomic.