Spectral Theorem Approach to the Characteristic Function of Quantum


Communications on Stochastic Analysis, Vol. 13, No. 2 (2019), Article 3 (27 pages)
DOI: 10.31390/cosa.13.2.03
Available at: https://digitalcommons.lsu.edu/cosa/vol13/iss2/3

SPECTRAL THEOREM APPROACH TO THE CHARACTERISTIC FUNCTION OF QUANTUM OBSERVABLES

ANDREAS BOUKAS* AND PHILIP FEINSILVER

Andreas Boukas, Università di Roma Tor Vergata, Via di Torvergata, Roma, Italy, [email protected]
Philip J. Feinsilver, Department of Mathematics, Southern Illinois University, Carbondale, Illinois, USA, [email protected]

Recommended Citation: Boukas, Andreas and Feinsilver, Philip J. (2019) "Spectral Theorem Approach to the Characteristic Function of Quantum Observables," Communications on Stochastic Analysis: Vol. 13, No. 2, Article 3.

Abstract. Using the spectral theorem we compute the Quantum Fourier Transform, or Vacuum Characteristic Function, $\langle \Phi, e^{itH} \Phi \rangle$ of an observable $H$ defined as a self-adjoint sum of the generators of a finite-dimensional Lie algebra, where $\Phi$ is a unit vector in a Hilbert space $\mathcal{H}$. We show how Stone's formula for computing the spectral resolution of a self-adjoint operator on a Hilbert space can serve as an alternative to the traditional reliance on splitting or disentanglement formulas for the operator exponential.

1.
Introduction

The simplest quantum analogue of a classical probability space $(\Omega, \sigma, \mu)$, where $\Omega$ is a set, $\sigma$ is a sigma-algebra of subsets of $\Omega$, and $\mu$ is a measure defined on $\sigma$ with $\mu(\Omega) = 1$, is a finite-dimensional quantum probability space [14], defined as a triple $(\mathcal{H}, \mathcal{P}(\mathcal{H}), \rho)$ where $\mathcal{H}$ is a finite-dimensional Hilbert space, $\mathcal{P}(\mathcal{H})$ is the set of projections (called events) $E : \mathcal{H} \to \mathcal{H}$, and $\rho : \mathcal{H} \to \mathcal{H}$ is a state on $\mathcal{H}$, i.e., a positive operator of unit trace. We call $\operatorname{tr} \rho E$ the probability of the event $E$ in the state $\rho$.

For a quantum observable $H$, i.e., for a symmetric or Hermitian matrix $H$, the characteristic function or Fourier transform of $H$ in the state $\rho$ is defined as $\operatorname{tr} \rho\, e^{itH}$. If $\rho$ is a pure state defined in terms of a unit vector $u$, i.e., if $\rho = |u\rangle\langle u|$, then the characteristic function of $H$ in the state defined by $u$ is $\langle u, e^{itH} u \rangle$. By the spectral theorem, if $H = \sum_n \lambda_n E_n$ then for every continuous function $\varphi : \mathbb{R} \to \mathbb{C}$,

$$\varphi(H) = \sum_n \varphi(\lambda_n) E_n.$$

Therefore, for $\varphi(H) = e^{itH}$ we have

$$\langle u, e^{itH} u \rangle = \sum_n \langle u, e^{it\lambda_n} E_n u \rangle = \sum_n e^{it\lambda_n} \langle u, E_n u \rangle,$$

where we have assumed that the inner product is linear in the second argument and conjugate linear in the first.

Received 2019-7-1; Accepted 2019-8-23; Communicated by the editors.
2010 Mathematics Subject Classification. Primary 81Q10, 47B25, 47B15, 47A10; Secondary 47B40, 47B47, 80M22.
Key words and phrases. Quantum Fourier transform, vacuum characteristic function, quantum observable, Stone's formula, spectral resolution, spectral integral, unbounded self-adjoint operator, multiplication and differentiation operator, operator exponential, Lie algebra, Heisenberg algebra, splitting formula, disentanglement.
* Corresponding author.

If the Hilbert space $\mathcal{H}$ is infinite dimensional then the
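In finite dimensions the spectral sum above can be checked directly. The following sketch (NumPy assumed; the observable $H$, the state $u$, and the time $t$ are illustrative choices, not prescribed by the paper) computes $\langle u, e^{itH} u \rangle$ once from the eigenprojections and once from the matrix exponential:

```python
import numpy as np

# A real symmetric observable H and a unit state vector u (illustrative choices).
H = np.array([[1.0, 1.0], [1.0, -1.0]])
u = np.array([1.0, 0.0])
t = 0.7

# Spectral decomposition H = sum_n lambda_n E_n (eigh returns orthonormal eigenvectors).
lam, V = np.linalg.eigh(H)

# Characteristic function via the spectral sum: sum_n e^{it lambda_n} <u, E_n u>.
weights = np.abs(V.T @ u) ** 2                    # <u, E_n u> = |<v_n, u>|^2
cf_spectral = np.sum(np.exp(1j * t * lam) * weights)

# The same quantity via e^{itH}, assembled from the eigendecomposition.
expitH = V @ np.diag(np.exp(1j * t * lam)) @ V.conj().T
cf_direct = u.conj() @ expitH @ u

assert np.allclose(cf_spectral, cf_direct)
```

Both routes agree because $e^{itH} = \sum_n e^{it\lambda_n} E_n$.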
above sums are replaced by spectral integrals with respect to a resolution of the identity $\{E_\lambda : \lambda \in \mathbb{R}\}$, and we have the corresponding formulas

$$H = \int_{\mathbb{R}} \lambda \, dE_\lambda, \qquad \varphi(H) = \int_{\mathbb{R}} \varphi(\lambda) \, dE_\lambda,$$

and

$$\langle u, e^{itH} u \rangle = \int_{\mathbb{R}} e^{it\lambda} \, d\langle u, E_\lambda u \rangle.$$

For compact self-adjoint operators $H$, the above spectral integrals reduce to finite or infinite sums over the nonzero eigenvalues of $H$; see, e.g., [17], Theorem 4.2.

The above probabilistic interpretation is based on Bochner's theorem (see [18], p. 346), which states that a positive definite continuous function $f : \mathbb{R} \to \mathbb{C}$, i.e., a continuous function $f$ such that

$$\int_{\mathbb{R}} \int_{\mathbb{R}} f(t - s) \varphi(t) \bar{\varphi}(s) \, dt \, ds \ge 0$$

for every continuous function $\varphi : \mathbb{R} \to \mathbb{C}$ with compact support, can be represented as

$$f(t) = \int_{\mathbb{R}} e^{it\lambda} \, dv(\lambda),$$

where $v$ is a non-decreasing, right-continuous, bounded function. If $f(0) = 1$ then such a function $v$ defines a probability measure on $\mathbb{R}$, and Bochner's theorem says that $f$ is the Fourier transform of a probability measure, i.e., the characteristic function of a random variable that follows the probability distribution defined by $v$. Moreover, the condition of positive definiteness of $f$ is necessary and sufficient for such a representation. The function $f(t) = \langle u, e^{itH} u \rangle$, where $u$ is a unit vector and $H$ is a self-adjoint operator as described above, is an example of such a positive definite function.

In this paper we use this spectral-theorem-based approach to compute the characteristic function of several quantum random variables $H$ defined as self-adjoint sums of the generators of some finite-dimensional Lie algebras of interest in quantum mechanics (the only reason why a Lie structure is assumed is that splitting the exponential of a sum of operators is usually done through a Campbell-Baker-Hausdorff type formula that relies on commutation relations).
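Bochner's positive-definiteness condition can be illustrated numerically: for $f(t) = \langle u, e^{itH} u \rangle$, the Gram matrix $[f(t_j - t_k)]$ over any grid of points must be positive semidefinite. A minimal sketch (NumPy assumed; $H$, $u$, and the grid are illustrative choices):

```python
import numpy as np

# f(t) = <u, e^{itH} u> for a real symmetric H and a unit vector u.
H = np.array([[1.0, 1.0], [1.0, -1.0]])
u = np.array([0.6, 0.8])
lam, V = np.linalg.eigh(H)
w = np.abs(V.T @ u) ** 2                 # spectral weights <u, E_n u>

def f(t):
    return np.sum(np.exp(1j * t * lam) * w)

# Gram matrix G[j, k] = f(t_j - t_k); it is Hermitian since f(-t) = conj(f(t)).
ts = np.linspace(-3.0, 3.0, 25)
G = np.array([[f(tj - tk) for tk in ts] for tj in ts])

eigvals = np.linalg.eigvalsh(G)
assert np.isclose(f(0), 1.0)             # f(0) = ||u||^2 = 1
assert eigvals.min() > -1e-10            # PSD up to floating-point roundoff
```

Here the Gram matrix is exactly $\sum_n w_n z_n z_n^*$ with $z_n = (e^{it_j \lambda_n})_j$, hence positive semidefinite, matching Bochner's criterion.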
If $\mathcal{H} = \mathbb{R}^n$ or $\mathcal{H} = \mathbb{C}^n$, for vectors $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$ in $\mathcal{H}$ we will use the standard inner products $\langle u, v \rangle = u v^T$ and $\langle u, v \rangle = \bar{u} v^T$, respectively. The identity matrix/operator is denoted by $I$, while $\delta$ denotes Dirac's delta function defined, for a test function $\varphi$, by

$$\int_{\mathbb{R}} \delta(x - a) \varphi(x) \, dx = \int_{\mathbb{R}} \delta(a - x) \varphi(x) \, dx = \varphi(a).$$

We define the Fourier transform of $f$ by

$$\hat{f}(t) = (Uf)(t) = (2\pi)^{-1/2} \int_{-\infty}^{\infty} e^{i\lambda t} f(\lambda) \, d\lambda,$$

and the inverse Fourier transform of $\hat{f}$ by

$$f(\lambda) = (U^{-1} \hat{f})(\lambda) = (2\pi)^{-1/2} \int_{-\infty}^{\infty} e^{-i\lambda t} \hat{f}(t) \, dt,$$

so the inverse Fourier transform of $(2\pi)^{-1/2} \langle u, e^{itH} u \rangle$ gives the probability density function $p(\lambda)$ of $H$.

For $a \in \mathbb{C}$, let $R(a; H) = (a - H)^{-1}$ denote the resolvent of an operator $H$. The spectral resolution $\{E_\lambda \mid \lambda \in \mathbb{R}\}$ of a bounded or unbounded self-adjoint operator $H$ in a complex separable Hilbert space $\mathcal{H}$ is given by Stone's formula (see [5], Theorems X.6.1 and XII.2.10)

$$E((a, b)) = \lim_{\delta \to 0^+} \lim_{\epsilon \to 0^+} \frac{1}{2\pi i} \int_{a+\delta}^{b-\delta} \left( R(t - \epsilon i; H) - R(t + \epsilon i; H) \right) dt,$$

where $(a, b)$ is the open interval $a < \lambda < b$, $R(t \mp \epsilon i; H) = (t \mp \epsilon i - H)^{-1}$, and the limit is in the strong operator topology. For $a \to -\infty$ and $b = \lambda$ we have

$$E_\lambda = E((-\infty, \lambda]) = \lim_{\rho \to 0^+} E((-\infty, \lambda + \rho)) = \lim_{\rho \to 0^+} \lim_{\delta \to 0^+} \lim_{\epsilon \to 0^+} \frac{1}{2\pi i} \int_{-\infty}^{\lambda + \rho - \delta} \left( R(t - \epsilon i; H) - R(t + \epsilon i; H) \right) dt.$$

In particular (see [16], Theorem 4.31), for $f, g \in \mathcal{H}$,

$$\langle f, E_\lambda g \rangle = \lim_{\rho \to 0^+} \lim_{\delta \to 0^+} \lim_{\epsilon \to 0^+} \frac{1}{2\pi i} \int_{-\infty}^{\lambda + \rho - \delta} \langle f, \left( R(t - \epsilon i; H) - R(t + \epsilon i; H) \right) g \rangle \, dt.$$

Thus, for a unit vector $u$, the vacuum resolution of the identity (terminology coming from the case when $u = \Phi$, the vacuum vector in a Fock space) of the operator $H$ is given by

$$\langle u, E_\lambda u \rangle = \lim_{\rho \to 0^+} \lim_{\delta \to 0^+} \lim_{\epsilon \to 0^+} \frac{1}{2\pi i} \int_{-\infty}^{\lambda + \rho - \delta} \langle u, \left( R(t - \epsilon i; H) - R(t + \epsilon i; H) \right) u \rangle \, dt. \tag{1.1}$$

In Section 9 we will show how, using formula (1.1), we can avoid the reliance on splitting or disentanglement lemmas, such as Lemma 8.3 of Section 8 for the splitting of operator exponentials, in order to compute the characteristic function of a quantum random variable. In particular, Stone's formula frees us from any dependence on Lie algebraic structures. However, the difficulty of obtaining a splitting lemma is replaced by that of computing the resolvent and the resulting spectral integrals.

2. Quantum Observables in sl(2, R)

The Lie algebra $\mathfrak{sl}(2, \mathbb{R})$ of real $2 \times 2$ matrices of zero trace is generated [7] by the matrices

$$\Delta = \begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}, \qquad R = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad \rho = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

with commutation relations

$$[\Delta, R] = \rho, \qquad [\rho, R] = 2R, \qquad [\rho, \Delta] = -2\Delta.$$

We notice that the matrix

$$H = R - \Delta + \rho = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

is real symmetric, thus it is a quantum observable.
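Stone's formula also lends itself to direct numerical approximation: fixing a small $\epsilon > 0$ and discretizing the $t$-integral turns the resolvent difference into a Lorentzian approximation of the spectral measure. A rough sketch (NumPy assumed; the operator, vector, tolerances, and grid parameters are illustrative choices, and the limits are only approximated):

```python
import numpy as np

# Discretized Stone's formula: <u, E((a,b)) u> is approximated, for small eps, by
# (1/(2*pi*i)) * integral_a^b <u, (R(t - i*eps) - R(t + i*eps)) u> dt,
# where R(z) = (z - H)^{-1} is the resolvent.
H = np.array([[1.0, 1.0], [1.0, -1.0]])
u = np.array([0.6, 0.8])
I = np.eye(2)

def resolvent_form(z):
    # <u, (z - H)^{-1} u>, computed by solving a linear system (no eigendata used).
    return u @ np.linalg.solve(z * I - H, u)

def spectral_weight(a, b, eps=1e-3, n=30001):
    ts = np.linspace(a, b, n)
    dt = ts[1] - ts[0]
    vals = np.array([(resolvent_form(t - 1j * eps) - resolvent_form(t + 1j * eps))
                     for t in ts]) / (2j * np.pi)
    return vals.real.sum() * dt          # Riemann sum over the fine grid

# The interval (-3, 0) isolates the eigenvalue -sqrt(2) of H; the recovered weight
# should match P(H = -sqrt(2)) = (2 - sqrt(2)*(a^2 + 2ab - b^2))/4 from Section 2.
a, b = 0.6, 0.8
c = a**2 + 2 * a * b - b**2
w_minus = spectral_weight(-3.0, 0.0)
assert abs(w_minus - (2 - np.sqrt(2) * c) / 4) < 1e-3
```

This trades the algebraic work of a splitting formula for a resolvent computation, exactly the trade-off described in the paragraph above.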
Its eigenvalues are

$$\lambda_1 = -\sqrt{2}, \qquad \lambda_2 = \sqrt{2},$$

with corresponding eigenspaces

$$V_1 = \left\{ x v_1 : v_1 = \begin{pmatrix} 1 - \sqrt{2} \\ 1 \end{pmatrix}, \ x \in \mathbb{R} \right\}, \qquad V_2 = \left\{ x v_2 : v_2 = \begin{pmatrix} 1 + \sqrt{2} \\ 1 \end{pmatrix}, \ x \in \mathbb{R} \right\},$$

corresponding normalized basic eigenvectors

$$u_1 = \frac{v_1}{\|v_1\|} = \begin{pmatrix} \frac{1 - \sqrt{2}}{\sqrt{4 - 2\sqrt{2}}} \\ \frac{1}{\sqrt{4 - 2\sqrt{2}}} \end{pmatrix}, \qquad u_2 = \frac{v_2}{\|v_2\|} = \begin{pmatrix} \frac{1 + \sqrt{2}}{\sqrt{4 + 2\sqrt{2}}} \\ \frac{1}{\sqrt{4 + 2\sqrt{2}}} \end{pmatrix},$$

and eigenprojections

$$E_1 = |u_1\rangle\langle u_1| = u_1 u_1^T = \begin{pmatrix} \frac{1}{4}(2 - \sqrt{2}) & -\frac{1}{2\sqrt{2}} \\ -\frac{1}{2\sqrt{2}} & \frac{1}{4 - 2\sqrt{2}} \end{pmatrix}, \qquad E_2 = |u_2\rangle\langle u_2| = u_2 u_2^T = \begin{pmatrix} \frac{1}{4}(2 + \sqrt{2}) & \frac{1}{2\sqrt{2}} \\ \frac{1}{2\sqrt{2}} & \frac{1}{4 + 2\sqrt{2}} \end{pmatrix}.$$

We notice that $E_1$ and $E_2$ are a resolution of the identity, i.e.,

$$I = E_1 + E_2 \quad \text{and} \quad H = \lambda_1 E_1 + \lambda_2 E_2.$$

Moreover,

$$e^{itH} = \begin{pmatrix} \cos(\sqrt{2}\, t) + \frac{i}{\sqrt{2}} \sin(\sqrt{2}\, t) & \frac{i}{\sqrt{2}} \sin(\sqrt{2}\, t) \\ \frac{i}{\sqrt{2}} \sin(\sqrt{2}\, t) & \cos(\sqrt{2}\, t) - \frac{i}{\sqrt{2}} \sin(\sqrt{2}\, t) \end{pmatrix}.$$

If $u = (a, b)$ is a unit vector in $\mathbb{R}^2$ then for $t \in \mathbb{R}$,

$$\langle u, e^{itH} u \rangle = e^{it\lambda_1} \langle u, E_1 u \rangle + e^{it\lambda_2} \langle u, E_2 u \rangle = e^{it\lambda_1} u^T E_1 u + e^{it\lambda_2} u^T E_2 u$$
$$= (a^2 + b^2) \cos(\sqrt{2}\, t) + i \, \frac{a^2 - b^2 + 2ab}{\sqrt{2}} \sin(\sqrt{2}\, t) = \cos(\sqrt{2}\, t) + i \, \frac{a^2 - b^2 + 2ab}{\sqrt{2}} \sin(\sqrt{2}\, t),$$

so $H$ follows a Bernoulli distribution with probability density function

$$p_{a,b}(\lambda) = \frac{1}{4} \left( \left( 2 + \sqrt{2}(a^2 + 2ab - b^2) \right) \delta(\sqrt{2} - \lambda) + \left( 2 - \sqrt{2}(a^2 + 2ab - b^2) \right) \delta(\sqrt{2} + \lambda) \right),$$

that is, $H$ takes the values $\lambda_1 = -\sqrt{2}$ and $\lambda_2 = \sqrt{2}$ with probabilities

$$P(H = -\sqrt{2}) = \frac{1}{4} \left( 2 - \sqrt{2}(a^2 + 2ab - b^2) \right) \quad \text{and} \quad P(H = \sqrt{2}) = \frac{1}{4} \left( 2 + \sqrt{2}(a^2 + 2ab - b^2) \right),$$

respectively.
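The computations of this section are easy to verify numerically. The following sketch (NumPy assumed; the state $(a, b)$ and time $t$ are illustrative choices) checks the commutation relations, the eigenvalues and eigenprojections, and the closed form of the characteristic function:

```python
import numpy as np

# Generators of sl(2, R) and the observable H = R - Delta + rho = [[1,1],[1,-1]].
Delta = np.array([[0.0, 0.0], [-1.0, 0.0]])
R = np.array([[0.0, 1.0], [0.0, 0.0]])
rho = np.array([[1.0, 0.0], [0.0, -1.0]])

def comm(A, B):
    return A @ B - B @ A

# Commutation relations: [Delta, R] = rho, [rho, R] = 2R, [rho, Delta] = -2 Delta.
assert np.allclose(comm(Delta, R), rho)
assert np.allclose(comm(rho, R), 2 * R)
assert np.allclose(comm(rho, Delta), -2 * Delta)

H = R - Delta + rho
lam, V = np.linalg.eigh(H)               # ascending order: [-sqrt(2), sqrt(2)]
assert np.allclose(lam, [-np.sqrt(2), np.sqrt(2)])

# Eigenprojections E_n = u_n u_n^T resolve the identity and reconstruct H.
E1 = np.outer(V[:, 0], V[:, 0])
E2 = np.outer(V[:, 1], V[:, 1])
assert np.allclose(E1 + E2, np.eye(2))
assert np.allclose(lam[0] * E1 + lam[1] * E2, H)

# Characteristic function for u = (a, b):
# cos(sqrt(2) t) + i (a^2 + 2ab - b^2)/sqrt(2) * sin(sqrt(2) t).
a, b, t = 0.6, 0.8, 1.3
u = np.array([a, b])
c = a**2 + 2 * a * b - b**2
cf = np.exp(1j * t * lam[0]) * (u @ E1 @ u) + np.exp(1j * t * lam[1]) * (u @ E2 @ u)
assert np.isclose(cf, np.cos(np.sqrt(2) * t) + 1j * c / np.sqrt(2) * np.sin(np.sqrt(2) * t))
```

The spectral weights $u^T E_1 u$ and $u^T E_2 u$ are exactly the Bernoulli probabilities $P(H = \mp\sqrt{2})$ derived above.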