Markov Chains, Diffusion and Green Functions. Applications to Traffic Routing in Ad Hoc Networks and to Image Restoration
Chaînes de Markov, diffusion et fonctions de Green. Applications au routage dans les réseaux ad hoc et à la restauration d'images

Marc Sigelle*, Ian Jermyn† and Sylvie Perreau‡
[email protected]  [email protected]  [email protected]

May 2005 - 2009D003 - January 2009
Département Traitement du Signal et des Images
Groupe MM : Multimédia

January 12, 2009

Abstract

In this document, we recall the main definitions and basic properties of Markov chains on finite state spaces, Green functions, and partial differential equations (PDEs) together with their (approximate) resolution using diffusion walks on a discrete graph. We then apply all these tools to the study of traffic propagation and repartition in ad hoc networks. Finally, we apply the same framework to image restoration (with and without boundaries).

* Institut TELECOM, TELECOM ParisTech, CNRS UMR 5141, 46 rue Barrault, 75634 Paris Cedex 13, France
† INRIA Ariana, 2004 route des Lucioles, 06904 Sophia Antipolis Cedex, France
‡ Institute for Telecommunications Research, University of South Australia, Mawson Lakes, Adelaide, Australia

1 Introduction

In this document, we present and recall several basic definitions and properties of:

• Markov chains on finite state spaces;
• Green functions;
• partial differential equations (PDEs) and their (approximate) resolution using diffusion walks in a discrete graph, with particular emphasis on the modeling of boundary and source conditions;
• the application of these tools to traffic propagation and repartition in ad hoc networks, and to image restoration with an optional boundary process.

We try to present these topics in a pedagogical way, hoping this aim will be achieved.

2 Markov chains on a finite state space - Recalls

To start with, we consider a discrete finite state space $E$ of cardinal $N$. We then consider $E = \mathbb{R}^{|E|}$ as the vector space of real bounded functions on $E$:
$$ E \to \mathbb{R}, \qquad x \mapsto f(x). $$

Two functions (or classes of functions) play a particular role in $E$:

• the family of indicator functions $e_{x_0} = 1\!\!1_{x = x_0}$, $\forall x_0 \in E$, which are identified with the canonical basis vectors of $E$;
• the constant function $\mathbf{1}$ defined by $x \mapsto 1$, $\forall x \in E$.

Now a probability distribution $P$ is a positive measure on $E$, i.e., a (positive) linear form of total mass 1 on $E$.
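On a finite state space all of these objects are plain vectors: a function $f$ is a column vector indexed by $E$, a measure $P$ is a row vector of point masses, and $P(f)$ is their dot product. A minimal numpy sketch (the three-state space and the numerical values are purely illustrative):

```python
import numpy as np

# State space E = {0, 1, 2}, so N = |E| = 3.
N = 3

# Indicator functions e_{x0} are the canonical basis vectors.
e = np.eye(N)            # e[x0] is the function 1l_{x = x0}

one = np.ones(N)         # the constant function 1

# A probability measure P: a positive linear form of total mass 1,
# i.e. a row vector of point masses P(e_x) = Pr(X = x).
P = np.array([0.5, 0.3, 0.2])
assert np.isclose(P @ one, 1.0)      # the condition P(1) = 1

# P applied to a function f is the expectation E[f(X)].
f = np.array([1.0, 4.0, 10.0])
print(P @ f)             # P(f) = 0.5*1 + 0.3*4 + 0.2*10 = 3.7
```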
Defining
$$ \Pr(X = x_0) = P(e_{x_0}) = P(1\!\!1_{x = x_0}) \quad \forall x_0 \in E, $$
one has:
$$ P(f) = \sum_{x \in E} P(e_x)\, f(x) = \sum_{x \in E} \Pr(X = x)\, f(x) = \mathbb{E}[\, f\, ], $$
with of course $\sum_{x \in E} \Pr(X = x) = 1$. This last condition can be written as $P(\mathbf{1}) = 1$.

Also useful in the sequel will be the uniform measure on $E$, defined by:
$$ \mu_0(e_x) = \frac{1}{|E|} = \frac{1}{N} \quad \forall x \in E. $$

2.1 Stochastic matrices

Definition. A stochastic matrix $Q$ is such that $Q \mathbf{1} = \mathbf{1}$.

Right-wise multiplication of a probability measure by a stochastic matrix. Let $P$ be a probability measure and $Q$ a (stochastic) matrix in $\mathcal{L}(E)$. Then by definition:
$$ (PQ)(f) = P(Qf) \quad \forall f \in E. $$
In particular one has $(PQ)(\mathbf{1}) = P(\mathbf{1}) = 1$, so that $PQ$ is indeed a probability measure on $E$.

Now, let $y \in E$. Then the probability measure $Q_r = PQ$ applied to $f(x) = 1\!\!1_{x = y}$ writes:
$$ Q_r(1\!\!1_{x = y}) = (PQ)(e_y) = P(Q(e_y)) = \sum_{x \in E} \Pr(X = x)\, Q_{x y}. $$

2.2 Markov chains on E

Let us consider a Markov chain with initial probability measure $P_0$ and transition matrix $Q$. We have:
$$ \Pr(X_0 = x_0, \ldots, X_M = y) = \Pr(X_0 = x_0)\, Q_{x_0 x_1} \cdots Q_{x_{M-1} y} \qquad (1) $$
so that, noting $P_M = \Pr(X_M = \cdot)$, one has very formally and concisely:
$$ P_M = P_0\, Q^M. $$
In particular:
$$ P_M(y) = \Pr(X_M = y) = \sum_{x_0 \in E} \Pr(X_0 = x_0)\, (Q^M)_{x_0 y} $$
and thus
$$ \Pr(X_M = y \mid X_0 = x_0) = (Q^M)_{x_0 y}. $$

A useful formula for the sequel is the following: let us compute the expectation of some function $\psi \in E$ at step $M$ (with respect to the previous Markov chain). Then:
$$ \mathbb{E}[\, \psi(X_M)\, ] = P_M(\psi) = (P_0\, Q^M)(\psi) = P_0(Q^M \psi) = \sum_{x_0 \in E} \Pr(X_0 = x_0)\, (Q^M \psi)(x_0), \qquad (2) $$
which follows directly from the previous formulas. In particular:
$$ \forall \psi \in E, \quad \mathbb{E}[\, \psi(X_M) \mid X_0 = x_0\, ] = (Q^M \psi)(x_0). \qquad (3) $$

2.3 Invariant measure - convergence

Recall that $E$ is endowed with the $L^\infty$ norm $\| f \|_\infty = \sup_{x \in E} |f(x)|$,
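Formulas (1)-(3) translate directly into linear algebra: measures are pushed forward by right-multiplication with $Q$, functions are pulled back by left-multiplication. A quick numerical check, assuming an arbitrary illustrative 3-state transition matrix:

```python
import numpy as np

# An arbitrary 3x3 stochastic matrix Q (rows sum to 1, i.e. Q 1 = 1).
Q = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
assert np.allclose(Q @ np.ones(3), 1.0)

P0 = np.array([1.0, 0.0, 0.0])   # chain started at x0 = 0
M = 5
QM = np.linalg.matrix_power(Q, M)

# P_M = P_0 Q^M : the law of X_M.
PM = P0 @ QM
# Pr(X_M = y | X_0 = x0) = (Q^M)_{x0 y}; here PM is simply row x0 of Q^M.
assert np.allclose(PM, QM[0])

# Formula (3): E[psi(X_M) | X_0 = x0] = (Q^M psi)(x0),
# which agrees with formula (2) since P0 = e_0.
psi = np.array([0.0, 1.0, 2.0])
print((QM @ psi)[0])
assert np.isclose((QM @ psi)[0], PM @ psi)
```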
so that the variation distance between two measures is defined, equivalently, as:
$$ \| \mu - \nu \|_V = \sup_{\| f \|_\infty = 1} |\mu(f) - \nu(f)| \qquad (4a) $$
$$ \phantom{\| \mu - \nu \|_V} = \sum_{x \in E} |\mu(e_x) - \nu(e_x)| = 2 - 2 \sum_{x \in E} \min(\mu(e_x), \nu(e_x)) \qquad (4b) $$
and satisfies of course, from the first equality (4a):
$$ |\mu(f) - \nu(f)| \le \| \mu - \nu \|_V \, \| f \|_\infty \quad \forall \mu, \nu, f. $$

Now, the Dobrushin contraction coefficient of a stochastic matrix $A$ is defined as:
$$ c(A) = \frac{1}{2} \sup_{i, j} \| A_{i\,(\cdot)} - A_{j\,(\cdot)} \|_V, \qquad 0 \le c(A) \le 1. $$
It ensures that (Winkler, 1995):
$$ \| (\nu A) - (\mu A) \|_V \le c(A)\, \| \nu - \mu \|_V \quad \forall \mu, \nu, A, $$
$$ c(AB) \le c(A)\, c(B) \quad \forall A, B. $$

When $A$ is strictly positive, the second equality (4b) gives $0 \le c(A) < 1$ (hence the contraction property). In this case, the Perron-Frobenius theorem ensures a unique invariant measure $\mu$ of $A$, i.e.:
$$ (\mu A) = \mu. $$
Thus one has:
$$ \forall M \ge 0, \quad \| (P_0 A^M) - \mu \|_V = \| (P_0 A^M) - (\mu A^M) \|_V \le c(A)^M\, \| P_0 - \mu \|_V \quad \forall \text{ initial } P_0, \qquad (5) $$
so that the measure $P_0 A^M$ converges to $\mu$ as $M \to +\infty$, whatever the initial measure $P_0$.

This can be generalized to the case when some power $A^r$ of $A$ ($r > 1$) is strictly positive, although $A$ itself may not be. Indeed, writing the Euclidean division $M = nr + p$, $0 \le p < r$, and denoting by $\mu_r$ the invariant measure of $A^r$:
$$ \| P_0 A^{nr+p} - \mu_r A^p \|_V = \| P_0 (A^r)^n A^p - \mu_r (A^r)^n A^p \|_V \le c(A^r)^n\, \| P_0 A^p - \mu_r A^p \|_V \le c(A^r)^n\, c(A)^p\, \| P_0 - \mu_r \|_V, \quad 0 \le p < r. \qquad (6) $$

3 Discretization of a class of linear (elliptic) PDEs

We follow Ycart (1997) and Rubinstein (1981) with slight changes. Find $f \in E$ such that:
$$ \gamma (A - I) f = \phi, \quad A \in \mathcal{L}(E), \qquad \text{(possibly) subject to} \quad f(x_s) = b(x_s) \ \text{known} \quad \forall x_s \in B \subset E. \qquad (7) $$
Here $\gamma$ is a constant, and $B$ represents the boundary of the domain $E$, viewed as a multi-dimensional (in general 2D) discrete lattice.
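The Dobrushin coefficient and the geometric convergence bound (5) are easy to check numerically. The sketch below uses an arbitrary strictly positive 3-state matrix and extracts the invariant measure as the left Perron eigenvector (the matrix and all values are illustrative):

```python
import numpy as np

def dobrushin(A):
    """c(A) = 1/2 * sup_{i,j} || A_i. - A_j. ||_V (variation distance of rows)."""
    N = A.shape[0]
    return 0.5 * max(np.abs(A[i] - A[j]).sum()
                     for i in range(N) for j in range(N))

A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])     # strictly positive => c(A) < 1

c = dobrushin(A)
assert 0.0 <= c < 1.0

# Invariant measure mu: left eigenvector of A for eigenvalue 1
# (Perron-Frobenius), normalized to total mass 1.
w, V = np.linalg.eig(A.T)
mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
mu /= mu.sum()
assert np.allclose(mu @ A, mu)

# Geometric convergence (5): || P0 A^M - mu ||_V <= c(A)^M || P0 - mu ||_V.
P0 = np.array([1.0, 0.0, 0.0])
for M in range(1, 10):
    lhs = np.abs(P0 @ np.linalg.matrix_power(A, M) - mu).sum()
    assert lhs <= c**M * np.abs(P0 - mu).sum() + 1e-12
print(c, mu)
```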
3.1 A linear algebra point of view

We split $E$ into boundary / non-boundary parts, so that the following decompositions hold:
$$ E = \tilde{E} \cup B \ \text{(state space)}, \qquad E = \tilde{E} \oplus B \ \text{(functional vector space)}, \qquad f = \tilde{f} + b \ \text{(functions)}. $$
Problem (7) thus writes:
$$ \gamma (A - I) \tilde{f} = \phi + \gamma (I - A)\, b \quad \text{with } A \in \mathcal{L}(E). $$
Now let us denote by $\tilde{P}$ the linear projector on the subspace $\tilde{E}$ with kernel $B$. One has $\tilde{f} = \tilde{P} f = \tilde{P} \tilde{f}$ and $\tilde{P} b = 0$. Left-applying $\tilde{P}$ to the previous equation yields:
$$ \gamma (\tilde{A} - I) \tilde{f} = \psi = \tilde{P} \phi - \gamma\, \tilde{P} A\, b \quad \text{with } \tilde{A} = \tilde{P} A \in \mathcal{L}(\tilde{E}). \qquad (8) $$
Note that $\tilde{P} A \tilde{P} = \tilde{A} \tilde{P}$ and $\tilde{P} \phi$ are the restrictions of $A$ (resp. $\phi$) to $\tilde{E}$.

In the sequel, $A$ will often be a stochastic matrix (see the Poisson-Laplace case below), so that $\mathbf{1}$ is an eigenfunction of $A$ with eigenvalue $\lambda_1 = 1$. In many cases the multiplicity of $\lambda_1$ is 1 (see below) and all other eigenvectors have eigenvalues such that $|\lambda_i| < 1$. Thus, solving (8) under proper invertibility conditions yields:
$$ \tilde{f} = -\frac{1}{\gamma} (I - \tilde{A})^{-1} \psi. \qquad (9) $$

3.1.1 An example: the Laplace and Poisson equations

$E$ is now a discrete sublattice of $\mathbb{Z}^2$ with lattice step $h$ (and thus unit cell size $h \times h$). This sublattice is endowed with a non-oriented, connected graph structure, the neighborhood relationship being noted $x_t \sim x_s$ (e.g. 4-connectivity). Now:
$$ \Delta f(x_s) \approx \frac{\left[ \sum_{x_t \sim x_s} f(x_t) \right] - 4 f(x_s)}{h^2} \quad \Rightarrow \quad \Delta f \approx \gamma (A - I) f $$
$$ \text{with } \gamma = \frac{4}{h^2} \quad \text{and} \quad (Af)(x_s) = \frac{1}{4} \sum_{x_t \sim x_s} f(x_t) \quad \forall x_s \in E \ \text{(averaging operator)}. $$
In this case:
$$ B(x_s) = \gamma\, \tilde{P} A\, b(x_s) = \frac{1}{h^2} \sum_{x_t \sim x_s} b(x_t) \quad \forall x_s \in \tilde{E}. $$
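For the Laplace equation ($\phi = 0$), equations (8)-(9) reduce to $\tilde{f} = (I - \tilde{A})^{-1} \tilde{P} A\, b$, the constant $\gamma$ cancelling. A small self-contained check on an $n \times n$ grid with the 4-connectivity averaging operator; the grid size and boundary data are illustrative, and $g(i, j) = i^2 - j^2$ is chosen because it is exactly harmonic for the 5-point stencil, so the linear solve must reproduce it in the interior:

```python
import numpy as np

# Discrete Dirichlet problem on an n x n lattice with 4-connectivity:
# f harmonic inside (f = A f), f = g on the boundary ring.
n = 7
interior = [(i, j) for i in range(1, n - 1) for j in range(1, n - 1)]
pos = {s: k for k, s in enumerate(interior)}      # interior index map
m = len(interior)

def g(i, j):
    """Boundary data; i^2 - j^2 is discretely harmonic for the 5-point stencil."""
    return float(i * i - j * j)

# Restricted averaging operator A~ = P~ A, and boundary term P~ A b.
A_t = np.zeros((m, m))
rhs = np.zeros(m)
for (i, j), k in pos.items():
    for (u, v) in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if (u, v) in pos:
            A_t[k, pos[(u, v)]] = 0.25            # interior neighbor
        else:
            rhs[k] += 0.25 * g(u, v)              # boundary neighbor: known value

# f~ = (I - A~)^{-1} P~ A b   (formulas (8)-(9) with phi = 0; gamma cancels)
f_int = np.linalg.solve(np.eye(m) - A_t, rhs)
exact = np.array([g(i, j) for (i, j) in interior])
print(np.max(np.abs(f_int - exact)))              # ~ 0, up to round-off
```

The same system could of course be solved by simulating the random walk of Section 2 instead of the direct linear solve; that is exactly the diffusion interpretation developed in the rest of the report.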