Introduction to random fields and scale invariance
Hermine Biermé


To cite this version: Hermine Biermé. Introduction to random fields and scale invariance. 2017. hal-01493834v2. https://hal.archives-ouvertes.fr/hal-01493834v2 (preprint submitted on 5 May 2018).

Hermine Biermé, LMA, UMR CNRS 7348, Université de Poitiers, bd Marie et Pierre Curie, 86962 Chasseneuil, France, e-mail: [email protected]

Abstract. In medical imaging, several authors have proposed to characterize the roughness of observed textures by their fractal dimensions. Fractal analysis of 1D signals is mainly based on stochastic modeling using the famous fractional Brownian motion, whose fractal dimension is determined by its so-called Hurst parameter. Many 2D generalizations of this toy model may be defined, depending on the intended scope. This lecture presents some of them. After an introduction to random fields, the first part focuses on the construction of Gaussian random fields with prescribed invariance properties such as stationarity, self-similarity, or the operator scaling property. Sample path properties such as the modulus of continuity and the Hausdorff dimension of graphs are settled in the second part, in order to understand the links with fractal analysis. The third part concerns methods of simulation and estimation for these random fields in a discrete setting, together with some applications in medical imaging. Finally, the last part is devoted to geometric constructions involving marked Poisson point processes and shot noise processes.

1 Random fields and scale invariance

We recall in this section definitions and properties of random fields. Most of them can also be found in [22], but we try here to detail some important proofs. We stress invariance properties such as stationarity, isotropy, and scale invariance, and illustrate them with typical examples.

1.1 Introduction to random fields

As usual when talking about randomness, we let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, reflecting variability.

1.1.1 Definitions and distribution

Let us first recall the general definition of a stochastic process. For this purpose we have to consider a set of indices $T$. In this lecture we assume that $T \subset \mathbb{R}^d$ for some dimension $d \ge 1$.

Definition 1. A (real) stochastic process indexed by $T$ is a collection of real random variables, meaning that for all $t \in T$, the map $X_t : (\Omega, \mathcal{A}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is measurable.

Stochastic processes are very important in stochastic modeling as they can mimic numerous natural phenomena. For instance, when $d = 1$, one can choose $T \subset \mathbb{R}$ (seen as time parameters) and consider $X_t(\omega)$ as the value of the heart frequency, with measurement noise, at time $t \in T$ for an individual $\omega \in \Omega$. Note that, in practice, data are only available on a discrete finite subset $S$ of $T$, for instance each millisecond.
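To make the discrete sampling concrete, here is a minimal Python sketch (not from the original text) of one realization of such a process observed on a millisecond grid; the trend-plus-Gaussian-noise model and all numerical values are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: a "heart frequency" signal observed each
# millisecond, modeled as a smooth trend plus Gaussian measurement noise.
rng = np.random.default_rng(0)

S = np.arange(0.0, 1.0, 1e-3)           # discrete grid S in T = [0, 1], step 1 ms
trend = 60 + 5 * np.sin(2 * np.pi * S)  # deterministic mean function m_X(t)
noise = rng.normal(0.0, 0.5, S.size)    # i.i.d. N(0, 0.25) measurement errors

X = trend + noise                       # one realization t -> X_t(omega) on S
```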
When $d = 2$, choosing $T = [0,1]^2$, the value $X_t(\omega)$ may correspond to the grey level of a picture at the point $t \in T$. Again, in practice, data are only available on the pixels $S = \{0, 1/n, \ldots, 1\}^2 \subset T$ for an image of size $(n+1) \times (n+1)$. In general we talk about random fields when $d > 1$ and keep the terminology stochastic processes only for $d = 1$.

Since we actually have a map $X$ from $\Omega \times T$ with values in $\mathbb{R}$, we can also consider it as a map from $\Omega$ to $\mathbb{R}^T$. We equip $\mathbb{R}^T$ with the smallest $\sigma$-algebra $\mathcal{C}$ such that the projections $\pi_t : (\mathbb{R}^T, \mathcal{C}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$, defined by $\pi_t(f) = f(t)$, are measurable. It follows that $X : (\Omega, \mathcal{A}) \to (\mathbb{R}^T, \mathcal{C})$ is measurable, and its distribution is defined as the image measure of $\mathbb{P}$ by $X$, which is a probability measure on $(\mathbb{R}^T, \mathcal{C})$. An important consequence of Kolmogorov's consistency theorem (see [37], p. 92) is the following equivalent definition.

Definition 2. The distribution of $(X_t)_{t \in T}$ is given by all its finite dimensional distributions (fdd), i.e. the distributions of all the real random vectors $(X_{t_1}, \ldots, X_{t_k})$ for $k \ge 1$ and $t_1, \ldots, t_k \in T$.

Note that joint distributions of random vectors of arbitrary size $k$ are often difficult to compute. However, we can infer some statistics of order one and two by considering only couples of variables.

Definition 3. The stochastic process $(X_t)_{t \in T}$ is a second order process if $\mathbb{E}(X_t^2) < +\infty$ for all $t \in T$. In this case we define
• its mean function $m_X : t \in T \mapsto \mathbb{E}(X_t) \in \mathbb{R}$;
• its covariance function $K_X : (t,s) \in T \times T \mapsto \mathrm{Cov}(X_t, X_s) \in \mathbb{R}$.

A particular case arises when $m_X = 0$; the process $X$ is then said to be centered. Otherwise, the stochastic process $Y = X - m_X$ is also a second order process, now centered, with the same covariance function $K_Y = K_X$. Hence we will mainly consider centered stochastic processes. The covariance function of a stochastic process must satisfy the following properties.

Proposition 1. A function $K : T \times T \to \mathbb{R}$ is a covariance function if and only if
1. $K$ is symmetric, i.e. $K(t,s) = K(s,t)$ for all $(t,s) \in T \times T$;
2. $K$ is non-negative definite: for all $k \ge 1$, $t_1, \ldots, t_k \in T$ and $\lambda_1, \ldots, \lambda_k \in \mathbb{R}$,
\[ \sum_{i,j=1}^{k} \lambda_i \lambda_j K(t_i, t_j) \ge 0. \]

Proof. The direct implication is immediate once one remarks that $\mathrm{Var}\big(\sum_{i=1}^{k} \lambda_i X_{t_i}\big) = \sum_{i,j=1}^{k} \lambda_i \lambda_j K(t_i, t_j)$, and a variance is non-negative. For the converse, we need to introduce Gaussian processes. ⊓⊔

1.1.2 Gaussian processes

As far as second order properties are concerned, the most natural class of processes is the Gaussian one.

Definition 4. A stochastic process $(X_t)_{t \in T}$ is a Gaussian process if for all $k \ge 1$ and $t_1, \ldots, t_k \in T$,
\[ (X_{t_1}, \ldots, X_{t_k}) \text{ is a Gaussian vector of } \mathbb{R}^k, \]
which is equivalent to the fact that, for all $\lambda_1, \ldots, \lambda_k \in \mathbb{R}$, the real random variable $\sum_{i=1}^{k} \lambda_i X_{t_i}$ is a Gaussian variable (possibly degenerate, i.e. constant).

Note that this definition completely characterizes the distribution of the process, in view of Definition 2.

Proposition 2. When $(X_t)_{t \in T}$ is a Gaussian process, $(X_t)_{t \in T}$ is a second order process and its distribution is determined by its mean function $m_X : t \mapsto \mathbb{E}(X_t)$ and its covariance function $K_X : (t,s) \mapsto \mathrm{Cov}(X_t, X_s)$.

This comes from the fact that the distribution of the Gaussian vector $(X_{t_1}, \ldots, X_{t_k})$ is characterized by its mean $(\mathbb{E}(X_{t_1}), \ldots, \mathbb{E}(X_{t_k})) = (m_X(t_1), \ldots, m_X(t_k))$ and its covariance matrix $(\mathrm{Cov}(X_{t_i}, X_{t_j}))_{1 \le i,j \le k} = (K_X(t_i, t_j))_{1 \le i,j \le k}$.
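To illustrate Proposition 2, here is a minimal sketch, assuming an exponential covariance $K(t,s) = e^{-|t-s|}$ (a standard valid covariance chosen for illustration, not one taken from the text): since the fdd are Gaussian, a sample of $(X_{t_1}, \ldots, X_{t_k})$ can be drawn from $m_X$ and $K_X$ alone, for instance through a Cholesky factorization of the covariance matrix.

```python
import numpy as np

# Minimal sketch: by Proposition 2, the fdd of a Gaussian process only
# involve m_X and K_X, so (X_{t_1}, ..., X_{t_k}) can be sampled from
# those two functions alone.
rng = np.random.default_rng(1)

t = np.linspace(0.0, 1.0, 200)                  # grid t_1 < ... < t_k
m = np.zeros_like(t)                            # centered process: m_X = 0
# Assumed covariance for illustration: K(t, s) = exp(-|t - s|).
K = np.exp(-np.abs(t[:, None] - t[None, :]))

# Cholesky factorization K = L L^T; then m + L Z has law N(m, K).
L = np.linalg.cholesky(K + 1e-12 * np.eye(t.size))  # small jitter for stability
X = m + L @ rng.standard_normal(t.size)             # one realization on the grid
```

The Cholesky factor plays the role of a square root of $K$; any matrix $L$ with $LL^* = K$ would do as well.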
Again, Kolmogorov's consistency theorem (see [37], p. 92 for instance) allows one to prove the following existence result, which completes the proof of Proposition 1.

Theorem 1. Let $m : T \to \mathbb{R}$ and let $K : T \times T \to \mathbb{R}$ be a symmetric and non-negative definite function. Then there exists a Gaussian process with mean $m$ and covariance $K$.

Let us give some insight into the construction of the fundamental example of a Gaussian process, namely the Brownian motion. We set here $T = \mathbb{R}_+$ and consider $(X_k)_{k \in \mathbb{N}}$ a family of independent identically distributed second order random variables with $\mathbb{E}(X_k) = 0$ and $\mathrm{Var}(X_k) = 1$. For any $n \ge 1$, we construct on $T$ the following stochastic process:
\[ S_n(t) = \frac{1}{\sqrt{n}} \sum_{k=1}^{[nt]} X_k. \]
By the central limit theorem (see [28] for instance) we clearly have, for $t > 0$,
\[ \sqrt{\frac{n}{[nt]}}\, S_n(t) \xrightarrow[n \to +\infty]{d} \mathcal{N}(0,1), \]
so that, by Slutsky's theorem (see [15] for instance), $S_n(t) \xrightarrow[n \to +\infty]{d} \mathcal{N}(0,t)$. Moreover, for $k \ge 1$, if $0 < t_1 < \ldots < t_k$, the increments over disjoint intervals involve disjoint blocks of the $X_k$'s and are therefore independent, so that
\[ (S_n(t_1), S_n(t_2) - S_n(t_1), \ldots, S_n(t_k) - S_n(t_{k-1})) \xrightarrow[n \to +\infty]{d} Z = (Z_1, \ldots, Z_k), \]
with $Z \sim \mathcal{N}(0, K_Z)$ for $K_Z = \mathrm{diag}(t_1, t_2 - t_1, \ldots, t_k - t_{k-1})$. Hence, identifying the $k \times k$ matrix
\[ P_k = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ 1 & \cdots & \cdots & 1 \end{pmatrix} \]
with the corresponding linear map on $\mathbb{R}^k$, we get
\[ (S_n(t_1), S_n(t_2), \ldots, S_n(t_k)) = P_k (S_n(t_1), S_n(t_2) - S_n(t_1), \ldots, S_n(t_k) - S_n(t_{k-1})) \xrightarrow[n \to +\infty]{d} P_k Z, \]
with $P_k Z \sim \mathcal{N}(0, P_k K_Z P_k^*)$ and $P_k K_Z P_k^* = (\min(t_i, t_j))_{1 \le i,j \le k}$. In particular, the function
\[ K(t,s) = \min(t,s) = \frac{1}{2}(t + s - |t - s|) \]
is a covariance function on the whole space $\mathbb{R}_+ \times \mathbb{R}_+$, and $(S_n)_n$ converges in finite dimensional distributions to a centered Gaussian stochastic process $X = (X_t)_{t \in \mathbb{R}_+}$ with covariance $K$, known (up to continuity of sample paths) as the standard Brownian motion on $\mathbb{R}_+$.

[Fig. 1: Sample path of a Brownian motion on $[0,1]$. The realization is obtained using the fast and exact synthesis presented in Section 3.1.1.]

Now we can extend this process to $\mathbb{R}$ by simply considering two independent centered Gaussian processes $X^{(1)}$ and $X^{(2)}$ on $\mathbb{R}_+$ with covariance function $K$, and defining $B_t := X_t^{(1)}$ for $t \ge 0$ and $B_t := X_{-t}^{(2)}$ for $t < 0$.
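The invariance-principle construction above is easy to visualize numerically. The following is a minimal sketch of the random-walk approximation $S_n$, assuming Rademacher steps $X_k = \pm 1$ (any centered unit-variance steps would do); it is not the fast and exact synthesis of Section 3.1.1, only an illustration of the convergence described here.

```python
import numpy as np

# Sketch of the random-walk approximation S_n(t) = n^{-1/2} * sum_{k <= nt} X_k.
# Illustration of the invariance-principle picture above, not the fast and
# exact synthesis of Section 3.1.1.
rng = np.random.default_rng(2)

n = 10_000
steps = rng.choice([-1.0, 1.0], size=n)   # i.i.d. X_k with E(X_k) = 0, Var(X_k) = 1
t = np.arange(1, n + 1) / n               # grid points t = k/n in (0, 1]
S = np.cumsum(steps) / np.sqrt(n)         # S_n(t) evaluated on the grid

# Empirically, Var S_n(t) is close to t and increments over disjoint intervals
# are independent, matching the Brownian covariance K(t, s) = min(t, s).
print(S[n // 2], S[-1])                   # S_n(1/2) and S_n(1) for one sample
```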
