
Random graph models

References

The reference for this chapter that best matches the mindset of this course is Barabási et al. (2016); an online, interactive version is available at http://networksciencebook.com/. Other good references for this chapter are, in order of relevance: Hofstad (2016), Durrett (2007), and Chung and Lu (2006). More traditional books (heavier on the mathematical side) written by "big names" of the field are Janson et al. (2011) and Bollobás (1998).

1.1 Some reminders

1.1.1 Probability toolbox

Proposition 1.1.1.
• For $a, b$ constants, $E(aX + b) = a E(X) + b$;
• $E(X_1 + \cdots + X_m) = E(X_1) + \cdots + E(X_m)$;
• Let $X$ be a r.v., $A$ an event and $\mathbf{1}_A(X)$ the indicator that the event $A$ is realized by $X$. Then $E\big[\mathbf{1}_A(X)\big] = \Pr(X \in A)$.

Definition 1.1.2. $\operatorname{Var}(X) = E\big[(X - E(X))^2\big]$.

Proposition 1.1.3.
• $\operatorname{Var}(X) = E(X^2) - (E(X))^2$;
• For $a, b$ constants, $\operatorname{Var}(aX + b) = a^2 \operatorname{Var}(X)$;
• If $X_1, \ldots, X_m$ are mutually independent r.v., then $\operatorname{Var}(X_1 + \cdots + X_m) = \operatorname{Var}(X_1) + \cdots + \operatorname{Var}(X_m)$.
• Without this independence, $\operatorname{Var}(X_1 + \cdots + X_m) = \operatorname{Var}(X_1) + \cdots + \operatorname{Var}(X_m) + \sum_{i \ne j} \operatorname{Cov}(X_i, X_j)$, where $\operatorname{Cov}(X_i, X_j) = E(X_i X_j) - E(X_i) E(X_j)$.

1.1.2 Basic probability laws

Definition 1.1.4. Let $X$ be a random variable. We say that $X$ is distributed according to a Bernoulli law of parameter $p \in [0, 1]$, and write $X \sim \mathrm{Ber}(p)$, if:
1. $X$ takes values in $\{0, 1\}$ (almost surely);
2. $\Pr(X = 1) = p$ and $\Pr(X = 0) = 1 - p$.

Example 1.1.5. A $\mathrm{Ber}(p)$ random variable models the outcome of a biased coin toss ($p$ is the probability of winning the toss).

Proposition 1.1.6. Let $X \sim \mathrm{Ber}(p)$. We have $E X = p$ and $\operatorname{Var} X = p(1 - p)$.

Definition 1.1.7. The binomial distribution with parameters $n$ and $p$, denoted $\mathrm{Bin}(n, p)$, is the discrete probability distribution of the number of successes in a sequence of $n$ independent Bernoulli trials of parameter $p$.

Proposition 1.1.8. If $(X_i)_{i=1,\ldots,n}$ is a sequence of $n$ i.i.d. random variables distributed according to $\mathrm{Ber}(p)$, then $\sum_i X_i \sim \mathrm{Bin}(n, p)$.

Corollary 1.1.9. Let $X \sim \mathrm{Bin}(n, p)$. Then $\Pr(X = k) = \binom{n}{k} p^k (1 - p)^{n-k}$. Moreover, $E X = np$ and $\operatorname{Var} X = np(1 - p)$.

Definition 1.1.10. The geometric distribution of parameter $p$, denoted $\mathrm{Geo}(p)$, is the distribution of the number of Bernoulli trials (of parameter $p$) needed to obtain one success. In particular, if $X \sim \mathrm{Geo}(p)$, then $\Pr(X = k) = (1 - p)^{k-1} p$.

Exercise 1.1.11. Let $X \sim \mathrm{Geo}(p)$. Show that $E X = \frac{1}{p}$ and $\operatorname{Var} X = \frac{1 - p}{p^2}$.

1.1.3 Concentration of random variables

First moment inequalities

Proposition 1.1.12 (Markov's inequality). Let $X$ be a random variable with positive values, and $a > 0$. We have:
\[ \Pr(X \ge a) \le \frac{E X}{a}. \]

Proof. $E X \ge E\big[X \mathbf{1}_{X \ge a}\big] \ge a\, E\big[\mathbf{1}_{X \ge a}\big] = a \Pr(X \ge a)$.

Remark 1.1.13. By letting $a = t\, E X$, we obtain $\Pr(X \ge t\, E X) \le \frac{1}{t}$. The convergence speed in $1/t$ is rather slow and, depending on the situation, might not be strong enough.

Corollary 1.1.14 (First moment method). Let $X$ be a positive, integer-valued random variable. We have:
\[ \Pr(X \ne 0) \le E(X). \]
The first moment is an upper bound on the probability that an integer-valued random variable is not equal to zero.

Proof. Since $X$ is integer-valued, we have $\Pr(X \ne 0) = \Pr(X > 0) = \Pr(X \ge 1)$, and from there we can use Markov's inequality with $a = 1$.

Application 1.1.15 (Union bound). Let $A_1, \ldots, A_m$ be a collection of events. Then,
\[ \Pr(A_1 \cup \cdots \cup A_m) \le \sum_{i=1}^m \Pr(A_i). \]
This can be shown by using the first moment method on $X = \sum_{i=1}^m \mathbf{1}_{A_i}$ and observing that $\{X > 0\} = A_1 \cup \cdots \cup A_m$.

Remark 1.1.16. The first moment method is generally used when we have a sequence of positive, integer-valued r.v. $X_n$ such that $E X_n \to 0$. In that case, $\Pr(X_n \ne 0) \le E X_n \to 0$, so $X_n \to 0$ in probability.
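To see the first moment method numerically, here is a minimal Monte Carlo sketch (Python with NumPy; the choice $p_n = 1/n^2$ and the variable names are illustrative assumptions, not part of the notes). It simulates $X_n \sim \mathrm{Bin}(n, 1/n^2)$, for which $E X_n = 1/n \to 0$, and compares the empirical $\Pr(X_n \ne 0)$ with the bound $E X_n$ of Corollary 1.1.14.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# X_n ~ Bin(n, p_n) with p_n = 1 / n^2, so that E[X_n] = 1/n -> 0.
# Corollary 1.1.14 then gives Pr(X_n != 0) <= E[X_n] = 1/n.
for n in [10, 100, 1000]:
    samples = rng.binomial(n, 1.0 / n**2, size=200_000)
    prob_nonzero = np.mean(samples != 0)  # Monte Carlo estimate of Pr(X_n != 0)
    print(f"n = {n:4d}   Pr(X_n != 0) ~ {prob_nonzero:.5f}   first moment bound 1/n = {1 / n:.5f}")
```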
Remark 1.1.17. One could think that, conversely, if $E X_n \to +\infty$, then $\Pr(X_n > 0) \to 1$; but this is not true, as the next example shows.

Example 1.1.18. Take $X_n$ such that $X_n = n^2$ with probability $1/n$ and $X_n = 0$ otherwise. Then $E(X_n) = n \to +\infty$, but $X_n \to 0$ in probability. Loosely speaking, this can happen because the variance of $X_n$ is very large.

Second moment inequalities

Proposition 1.1.19 (Chebyshev's inequality). Let $X$ be a random variable, and $a > 0$. We have:
\[ \Pr\big( |X - E X| \ge a \big) \le \frac{\operatorname{Var} X}{a^2}. \]

Proof. Apply Markov's inequality to $Y = (X - E X)^2$.

Example 1.1.20. Let $X$ be Gaussian $\mathcal{N}(0, \sigma^2)$. Then $E|X| = \sigma \sqrt{\frac{2}{\pi}}$. Markov applied to $|X|$ gives us:
\[ \Pr(X \ge a) \le \sqrt{\frac{2}{\pi}}\, \frac{\sigma}{a}, \]
while Chebyshev gives:
\[ \Pr(X \ge a) \le \Big( \frac{\sigma}{a} \Big)^2. \]
If $a$ is large, Chebyshev gives a stronger bound.

Application 1.1.21 (Weak law of large numbers). Let $X_1, \ldots, X_n$ be independent r.v. with mean $\mu$ and variance $\sigma^2 < +\infty$. Then, for every $\varepsilon > 0$:
\[ \Pr\Big( \Big| \frac{X_1 + \cdots + X_n}{n} - \mu \Big| > \varepsilon \Big) \to 0. \]
Actually, with extra work, the condition $\sigma^2 < +\infty$ is not needed. With a lot of extra work, one can show that the convergence is almost sure (and not only in probability, as we have here).

Proof. Chebyshev applied to $U_n := \frac{X_1 + \cdots + X_n}{n}$, which has mean $\mu$ and variance $\frac{\sigma^2}{n}$, gives:
\[ \Pr\big( |U_n - \mu| \ge \varepsilon \big) \le \frac{\sigma^2}{n \varepsilon^2} \to 0. \]

Corollary 1.1.22 (Second moment method). Let $X$ be a positive random variable. We have:
\[ \Pr(X = 0) \le \frac{\operatorname{Var} X}{(E X)^2} = \frac{E(X^2)}{(E X)^2} - 1. \]

Proof. Chebyshev with $a = E X$:
\[ \Pr(X = 0) \le \Pr\big( |X - E X| \ge E X \big) \le \frac{\operatorname{Var} X}{(E X)^2}. \]
The first inequality holds because $|X - E X| \ge E X$ is equivalent to $X \le 0$ or $X \ge 2 E X$; in particular, the event $\{X = 0\}$ is contained in it.

Concentration of sums of i.i.d. random variables

Proposition 1.1.23 (Hoeffding's inequality). Let $X_i$ be independent random variables such that $a_i \le X_i \le b_i$, and let $S_n = \sum_{i=1}^n X_i$. For $t > 0$, we have:
\[ \Pr\big( S_n \ge E S_n + t \big) \le \exp\Big( - \frac{2 t^2}{\sum_i (b_i - a_i)^2} \Big), \]
\[ \Pr\big( S_n \le E S_n - t \big) \le \exp\Big( - \frac{2 t^2}{\sum_i (b_i - a_i)^2} \Big), \]
\[ \Pr\big( |S_n - E S_n| \ge t \big) \le 2 \exp\Big( - \frac{2 t^2}{\sum_i (b_i - a_i)^2} \Big). \]
More details about concentration inequalities can be found in Chapter 2 of Vershynin (2018).

1.1.4 Exercises

Exercise 1.1.24. We throw $m$ balls into $n$ bins. What can we say about the probability that at least one bin is empty:
1. if $m = (1 + \varepsilon)\, n \log n$;
2. if $m = (1 - \varepsilon)\, n \log n$?

Exercise 1.1.25.
1. What is the 'typical' position of a simple random walk after $n$ steps?
2. What is the longest run of heads in $n$ flips of a fair coin?
3. What is the maximum of $n$ independent standard normal random variables?

1.1.5 Graph Theory

Definitions and vocabulary

Definition 1.1.26. A graph $G$ is a pair $(V, E)$, where $V$ is a (finite) set whose elements are called nodes (or vertices, or points), and $E$ is a set of ordered node pairs called edges (or links, lines, bonds). Some vocabulary:
• If $(ij) \in E \iff (ji) \in E$, the graph is said to be undirected. (It means that if there is a link going from $i$ to $j$, the same link exists in the opposite direction.)
• If for all nodes $i$, $(ii) \notin E$, we say the graph has no self-loops.

Definition 1.1.27.
• We call a path of $G$ of length $k$ a sequence $e_1, \ldots, e_k$ of edges $e_i = (v_{i-1}, v_i)$ where the $v_i$ are distinct vertices.
• A $k$-cycle is a path of length $k$ that starts and ends at the same vertex.
• Suppose $G$ is undirected. We say that two nodes $u, v$ are connected if there exists a path going from $u$ to $v$. We write $u \leftrightarrow v$.

Proposition 1.1.28. $\leftrightarrow$ is an equivalence relation. In particular, we can partition the nodes into equivalence classes, which we call connected components.

Proof. We have $u \leftrightarrow u$ (path of length $0$; reflexivity); if $u \leftrightarrow v$ and $v \leftrightarrow z$, then $u \leftrightarrow z$ (by concatenating the two paths; transitivity); finally, $u \leftrightarrow v$ implies $v \leftrightarrow u$ (the same path, traversed in the opposite direction; symmetry).
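As an illustration of Proposition 1.1.28, here is a minimal sketch (Python, standard library only; the function name connected_components and the toy graph are illustrative assumptions, not from the notes) that computes the equivalence classes of $\leftrightarrow$, i.e. the connected components, by breadth-first search.

```python
from collections import deque

def connected_components(nodes, edges):
    """Return the equivalence classes of <-> (the connected components)
    of an undirected graph given by its node set and edge list."""
    # Build an adjacency list; each undirected edge is stored in both directions.
    neighbors = {u: set() for u in nodes}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    components, visited = [], set()
    for source in nodes:
        if source in visited:
            continue
        # Breadth-first search from `source` collects its whole component.
        component, queue = {source}, deque([source])
        visited.add(source)
        while queue:
            u = queue.popleft()
            for w in neighbors[u]:
                if w not in visited:
                    visited.add(w)
                    component.add(w)
                    queue.append(w)
        components.append(component)
    return components

# Example: a graph with two connected components, {1, 2, 3} and {4, 5}.
print(connected_components([1, 2, 3, 4, 5], [(1, 2), (2, 3), (4, 5)]))
```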
Definition 1.1.29. If $G$ has only one equivalence class under the relation $\leftrightarrow$, we say that $G$ is connected; otherwise, we say that $G$ is disconnected. In particular, if a graph is connected, then for every pair of nodes $i$ and $j$ there exists a path going from $i$ to $j$.

Definition 1.1.30. Let $i, j$ be two nodes. We call distance between $i$ and $j$, denoted $d(i, j)$, the length of a shortest path between $i$ and $j$. If $i \not\leftrightarrow j$, then $d(i, j) := +\infty$.

Definition 1.1.31 (Diameter). We call diameter of a graph the largest distance between any pair of connected vertices.

Graph Theory and Linear Algebra

Definition 1.1.32. We call adjacency matrix (denoted by $A$) the binary matrix such that $A_{ij} = 1$ iff $(ij) \in E$.
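To tie Definition 1.1.32 back to Definitions 1.1.30 and 1.1.31, here is a minimal NumPy sketch (the helper name and the toy graph are illustrative assumptions, not from the notes). It uses the standard fact that $(A^k)_{ij} > 0$ exactly when there is a walk of length $k$ from $i$ to $j$, so $d(i, j)$ is the smallest such $k$, and the diameter is the largest finite distance.

```python
import numpy as np

def distances_from_adjacency(A):
    """All pairwise distances d(i, j) from a binary adjacency matrix A.

    (A^k)_{ij} > 0 iff there is a walk of length k from i to j, and the
    smallest such k is the graph distance. Unreachable pairs keep +inf.
    """
    n = A.shape[0]
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0)        # d(i, i) = 0
    reach = np.eye(n)                # reach holds A^k, starting with k = 0
    for k in range(1, n):            # distances in a graph on n nodes are < n
        reach = reach @ A
        newly_reached = (reach > 0) & np.isinf(dist)
        dist[newly_reached] = k
    return dist

# Example: a path graph 0 - 1 - 2 - 3 plus an isolated node 4.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1

dist = distances_from_adjacency(A)
finite = dist[np.isfinite(dist)]
print(dist)
print("diameter =", finite.max())    # largest distance between connected vertices
```

For large graphs a breadth-first search from each node is cheaper than repeated matrix products; the matrix version is shown here only because it makes the link between graph theory and linear algebra explicit.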