Lecture series - CSCG Matrix

Lecture Series on Math Fundamentals for MIMO Communications
Topic 1. Complex Random Vector and Circularly Symmetric Complex Gaussian Matrix
Drs. Yindi Jing and Xinwei Yu, University of Alberta
Last modified on: January 17, 2020

Content
Part 1. Definitions and Results
1.1 Complex random variable/vector/matrix
1.2 Circularly symmetric complex Gaussian (CSCG) matrix
1.3 Isotropic distribution, decomposition of CSCG random matrix, complex Wishart matrix and related PDFs
Part 2. Some MIMO applications
Part 3. References

Part 1. Definitions and Results

1.1 Complex random variable/vector/matrix
• Real-valued random variable
• Real-valued random vector
• Complex-valued random variable
• Complex-valued random vector
• Real-/complex-valued random matrix

Real-valued random variable
Def. A map from the sample space to the real number set, $\Omega \to \mathbb{R}$:
• a sample space $\Omega$;
• a probability measure $P(\cdot)$ defined on $\Omega$;
• an experiment on $\Omega$;
• a function: each outcome of the experiment $\mapsto$ a real number.
Notation: random variable $X$; sample value $x$.
The outcomes of the experiment lie in $\Omega$; they are "invisible" and do not matter. What matters are the values $x$ assigned by the function.
Examples:
• Flip a coin: head $\to$ 0, tail $\to$ 1. A random bit of 0 or 1.
• Roll a die: even $\to$ 0, odd $\to$ 1.

How to describe a random variable?
Def. The cumulative distribution function (CDF) of $X$:
$$F_X(x) = P(\omega \in \Omega : X(\omega) \le x) = P(X \le x).$$
Def. The probability density function (PDF) of $X$:
$$f_X(x) = \frac{dF_X(x)}{dx}.$$
Keep in mind: a random variable, though the result of an experiment, can be separated from the physical experiment. The values matter, not the outcomes; in other words, $\omega$ is invisible, and all one can see is $x$.

Common concepts, models, and properties of random variables:
• Discrete random variables, continuous random variables, mixed random variables
• Mean, variance, moments
• Multiple random variables: joint CDF/PDF, marginal CDF/PDF, conditional CDF/PDF, etc.
• Multiple random variables: independence and correlation

(Real-valued) random vector
$$\mathbf{x} = \begin{bmatrix} X_1 \\ \vdots \\ X_K \end{bmatrix}, \quad \text{value } \mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_K \end{bmatrix}.$$
• $K$-dimensional.
• Each $X_i$: a random variable.
• Described through the joint PDF:
$$f_{\mathbf{x}}(\mathbf{x}) = f_{X_1, X_2, \ldots, X_K}(x_1, x_2, \ldots, x_K).$$

Mean vector and covariance matrix
• Mean vector:
$$\mathbf{m} = E(\mathbf{x}) = \begin{bmatrix} E(X_1) \\ \vdots \\ E(X_K) \end{bmatrix} = \begin{bmatrix} m_1 \\ \vdots \\ m_K \end{bmatrix}.$$
• Covariance matrix:
$$\boldsymbol{\Sigma} = E\left\{ (\mathbf{x}-\mathbf{m})(\mathbf{x}-\mathbf{m})^T \right\} = \big[\, E[(X_i - m_i)(X_j - m_j)] \,\big],$$
with $\boldsymbol{\Sigma} \succeq 0$ (positive semidefinite matrix).

Complex-valued random variable
$X = X_r + jX_s$. Equivalent to a 2-dimensional real-valued random vector:
$$\hat{X} = \begin{bmatrix} X_r \\ X_s \end{bmatrix}.$$
To describe $\hat{X}$, use the joint PDF of $(X_r, X_s)$ $\iff$ the joint PDF of the random vector $\hat{X}$:
$$f_{\hat{X}}(\hat{x}) = f_{X_r, X_s}(x_r, x_s).$$

Complex-valued random vector
Equivalent to a real-valued random vector of twice the dimension:
$$\mathbf{x} = \mathbf{x}_r + j\mathbf{x}_s \iff \hat{\mathbf{x}} = \begin{bmatrix} \mathbf{x}_r \\ \mathbf{x}_s \end{bmatrix}.$$
To describe $\mathbf{x}$, use the joint PDF of $(\mathbf{x}_r, \mathbf{x}_s)$ (twice the dimension of $\mathbf{x}$):
$$f_{\hat{\mathbf{x}}}(\hat{\mathbf{x}}) = f_{\mathbf{x}_r, \mathbf{x}_s}(\mathbf{x}_r, \mathbf{x}_s) = f_{X_{r,1}, \ldots, X_{r,K}, X_{s,1}, \ldots, X_{s,K}}(x_{r,1}, \ldots, x_{r,K}, x_{s,1}, \ldots, x_{s,K}).$$

Real-/complex-valued random matrix
$\mathbf{X} = [x_{ij}]$: an $M \times N$ matrix. Each entry $x_{ij}$ is a real-/complex-valued random variable. Also use $\mathbf{X}$ for a sample or a realization.
$\iff$ an $(MN)$-dimensional real-/complex-valued random vector.
To distinguish random vectors from random variables, use $\mathbf{x}$ for both a random vector and its realization; reserve $\mathbf{X}$ for both a random matrix and its realization.

The vectorization operation for $\mathbf{X} = [x_{ij}]$
By columns:
$$\mathrm{vec}(\mathbf{X}) = \begin{bmatrix} x_{11} \\ \vdots \\ x_{M1} \\ \vdots \\ x_{1N} \\ \vdots \\ x_{MN} \end{bmatrix} = \begin{bmatrix} \mathbf{x}_{\mathrm{col},1} \\ \vdots \\ \mathbf{x}_{\mathrm{col},N} \end{bmatrix}.$$
By rows:
$$\mathrm{vec}_{\mathrm{row}}(\mathbf{X}) = \begin{bmatrix} x_{11} & \cdots & x_{1N} & \cdots & x_{M1} & \cdots & x_{MN} \end{bmatrix} = \begin{bmatrix} \mathbf{x}_{\mathrm{row},1} & \cdots & \mathbf{x}_{\mathrm{row},M} \end{bmatrix}.$$

Remarks on the vectorization operation
• To describe the real-/complex-valued random matrix $\mathbf{X}$ is to describe the real-/complex-valued random vector $\mathrm{vec}(\mathbf{X})$.
• For a real-valued random matrix: the joint PDF of the entries of $\mathrm{vec}(\mathbf{X})$.
• For a complex-valued random matrix: two ways.
– Vectorize first, then separate: $\widehat{\mathrm{vec}(\mathbf{X})} = \begin{bmatrix} \mathrm{vec}(\mathbf{X})_r \\ \mathrm{vec}(\mathbf{X})_s \end{bmatrix}$, where $\mathrm{vec}(\mathbf{X})_r$ and $\mathrm{vec}(\mathbf{X})_s$ are the real and imaginary parts of $\mathrm{vec}(\mathbf{X})$.
– Separate first, then vectorize: $\mathrm{vec}(\hat{\mathbf{X}}) = \mathrm{vec}\begin{bmatrix} \mathbf{X}_r \\ \mathbf{X}_s \end{bmatrix}$.
The two are equivalent; the difference is a permutation of indices, i.e., a linear transformation with a permutation matrix.

1.2 Circularly symmetric complex Gaussian (CSCG) matrix
• Real-valued Gaussian random variable
• Real-valued Gaussian random vector and random matrix
• Complex-valued Gaussian random vector and CSCG random vector
• CSCG random matrix

Real-valued Gaussian random variable
Gaussian/normal distribution: $X \sim \mathcal{N}(m, \sigma^2)$; $m$: mean, $\sigma^2$: variance.
PDF:
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-m)^2}{2\sigma^2}}.$$
Standard Gaussian distribution: $\mathcal{N}(0, 1)$.
Q-function: if $X \sim \mathcal{N}(0, 1)$,
$$Q(x) \triangleq P(X > x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}}\, dt.$$

Real-valued Gaussian random vector and random matrix
Random vector: $\mathbf{x} = [X_1, \ldots, X_K]^T$.
Def. The random vector $\mathbf{x}$ is a Gaussian random vector if $X_1, X_2, \ldots, X_K$ are jointly Gaussian.
Jointly Gaussian RVs:
• $X_1, X_2$ are jointly Gaussian if both are Gaussian and $X_1|X_2$, $X_2|X_1$ are Gaussian.
• This generalizes to more Gaussian RVs, e.g., $X_1, X_2, X_3$ are jointly Gaussian if each pair is jointly Gaussian and $(X_1, X_2)|X_3$, $(X_1, X_3)|X_2$, $(X_2, X_3)|X_1$ are jointly Gaussian.
• Independent Gaussian RVs are jointly Gaussian.
• Linear combinations of jointly Gaussian RVs are Gaussian.

• The PDF of $\mathbf{x}$ (joint PDF of $X_1, \ldots, X_K$) is
$$f_{\mathbf{x}}(\mathbf{x}) = \frac{1}{(\sqrt{2\pi})^K \det^{\frac{1}{2}}(\boldsymbol{\Sigma})} e^{-\frac{1}{2}(\mathbf{x}-\mathbf{m})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\mathbf{m})},$$
where $\mathbf{m}$ is the mean vector and $\boldsymbol{\Sigma}$ is the covariance matrix.
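The multivariate Gaussian PDF above is straightforward to evaluate numerically. The following NumPy sketch (the function name `gaussian_pdf` is ours, not from the lecture) implements the formula and checks that for $K = 1$, $\mathbf{m} = 0$, $\boldsymbol{\Sigma} = 1$ it reduces to the scalar $\mathcal{N}(0,1)$ PDF:

```python
import numpy as np

def gaussian_pdf(x, m, Sigma):
    """Evaluate the K-dimensional real Gaussian PDF at x."""
    K = len(m)
    d = x - m
    # Quadratic form (x - m)^T Sigma^{-1} (x - m), via solve for stability.
    quad = d @ np.linalg.solve(Sigma, d)
    # Normalization (sqrt(2*pi))^K * det^{1/2}(Sigma).
    norm = (2.0 * np.pi) ** (K / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

# Sanity check: K = 1, zero mean, unit variance gives 1/sqrt(2*pi) at x = 0.
val = gaussian_pdf(np.zeros(1), np.zeros(1), np.eye(1))
print(val)  # 1/sqrt(2*pi) ≈ 0.3989
```

Using `np.linalg.solve` instead of explicitly inverting $\boldsymbol{\Sigma}$ is the usual numerically safer choice.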
• Notation: $\mathbf{x} \sim \mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma})$.
• Several special cases:
– i.i.d. $\sim \mathcal{N}(0, 1)$:
$$f_{\mathbf{x}}(\mathbf{x}) = \frac{1}{(\sqrt{2\pi})^K} e^{-\frac{\|\mathbf{x}\|_2^2}{2}} = \frac{1}{(\sqrt{2\pi})^K} e^{-\frac{1}{2}\sum_{i=1}^{K} x_i^2}$$
– Independent only:
$$f_{\mathbf{x}}(\mathbf{x}) = \frac{1}{(\sqrt{2\pi})^K\, \sigma_1 \cdots \sigma_K} e^{-\sum_{i=1}^{K} \frac{(x_i - m_i)^2}{2\sigma_i^2}}$$
– Zero-mean
– Other cases ...

For a random matrix $\mathbf{X}$ which is $M \times N$, conduct vectorization to obtain $\mathrm{vec}(\mathbf{X})$, an $(MN)$-dimensional random vector.
• Working with the $M \times N$ matrix $\mathbf{X}$ $\iff$ working with the $(MN)$-dimensional vector $\mathrm{vec}(\mathbf{X})$.
• Thus the mean vector is $MN$-dimensional and the covariance matrix is $(MN) \times (MN)$.
• $\mathbf{X}$ is a Gaussian matrix if $\mathrm{vec}(\mathbf{X})$ is a Gaussian random vector.
• Matrix normal distribution: the PDF has the following form:
$$f_{\mathbf{X}}(\mathbf{X}) = \frac{e^{-\frac{1}{2} \mathrm{tr}\left[\mathbf{V}^{-1} (\mathbf{X}-\mathbf{M})^T \mathbf{U}^{-1} (\mathbf{X}-\mathbf{M})\right]}}{(\sqrt{2\pi})^{MN} \left(\sqrt{\det(\mathbf{U})}\right)^N \left(\sqrt{\det(\mathbf{V})}\right)^M},$$
for some $M \times M$ matrix $\mathbf{U}$ and $N \times N$ matrix $\mathbf{V}$. Equivalently, $\mathrm{vec}(\mathbf{X}) \sim \mathcal{N}(\mathrm{vec}(\mathbf{M}), \mathbf{V} \otimes \mathbf{U})$.

• If the columns of $\mathbf{X}$ are i.i.d., each following $\mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma})$, the mean vector of $\mathrm{vec}(\mathbf{X})$ is $\tilde{\mathbf{m}} = [\mathbf{m}^T, \ldots, \mathbf{m}^T]^T$ and the covariance matrix is $\tilde{\boldsymbol{\Sigma}} = \mathrm{diag}\{\boldsymbol{\Sigma}, \ldots, \boldsymbol{\Sigma}\} = \mathbf{I} \otimes \boldsymbol{\Sigma}$.
– Notice that $\mathbf{m}$ is $M$-dimensional and $\boldsymbol{\Sigma}$ is $M \times M$.
– PDF:
$$f_{\mathbf{X}}(\mathbf{X}) = f_{\mathrm{vec}(\mathbf{X})}(\mathrm{vec}(\mathbf{X})) = \frac{1}{(\sqrt{2\pi})^{MN} \det^{\frac{1}{2}}(\tilde{\boldsymbol{\Sigma}})} e^{-\frac{1}{2}(\mathrm{vec}(\mathbf{X}) - \tilde{\mathbf{m}})^T \tilde{\boldsymbol{\Sigma}}^{-1} (\mathrm{vec}(\mathbf{X}) - \tilde{\mathbf{m}})}$$
$$= \prod_{n=1}^{N} \frac{1}{(\sqrt{2\pi})^M \det^{\frac{1}{2}}(\boldsymbol{\Sigma})} e^{-\frac{1}{2}(\mathbf{x}_{\mathrm{col},n} - \mathbf{m})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x}_{\mathrm{col},n} - \mathbf{m})}$$
$$= \frac{1}{(\sqrt{2\pi})^{MN} \det^{\frac{N}{2}}(\boldsymbol{\Sigma})} e^{-\frac{1}{2} \mathrm{tr}\left[(\mathbf{X}-\mathbf{M})^T \boldsymbol{\Sigma}^{-1} (\mathbf{X}-\mathbf{M})\right]},$$
where $\mathbf{M} = [\mathbf{m}, \ldots, \mathbf{m}]$.

– For this special case,
$$\boldsymbol{\Sigma} = \frac{1}{N} E\left[(\mathbf{X}-\mathbf{M})(\mathbf{X}-\mathbf{M})^T\right].$$
– For zero-mean ($\mathbf{m} = \mathbf{0}$), $\boldsymbol{\Sigma} = \frac{1}{N} E[\mathbf{X}\mathbf{X}^T]$ and
$$f_{\mathbf{X}}(\mathbf{X}) = \frac{1}{(\sqrt{2\pi})^{MN} \det^{\frac{N}{2}}(\boldsymbol{\Sigma})} e^{-\frac{1}{2} \mathrm{tr}\left[\mathbf{X}^T \boldsymbol{\Sigma}^{-1} \mathbf{X}\right]}.$$
– Many works use the simplified notation $\mathbf{X} \sim \mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma})$ for this case.
* This cannot be used for the general case.
* Understand what it really means.
* Rigorously speaking, $\boldsymbol{\Sigma}$ is not the covariance matrix of $\mathbf{X}$.
* For the general case, the right-hand side of the top formula is the average covariance matrix of the columns of $\mathbf{X}$.

• If the rows of $\mathbf{X}$ are i.i.d., each following $\mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma})$:
– Notice that $\mathbf{m}$ is $N$-dimensional (a column vector) and $\boldsymbol{\Sigma}$ is $N \times N$.
– PDF:
$$f_{\mathbf{X}}(\mathbf{X}) = \frac{1}{(\sqrt{2\pi})^{MN} \det^{\frac{M}{2}}(\boldsymbol{\Sigma})} e^{-\frac{1}{2} \mathrm{tr}\left[(\mathbf{X}-\mathbf{M}) \boldsymbol{\Sigma}^{-1} (\mathbf{X}-\mathbf{M})^T\right]}.$$
– For zero-mean,
$$\boldsymbol{\Sigma} = \frac{1}{M} E\left[\mathbf{X}^T \mathbf{X}\right]$$
and
$$f_{\mathbf{X}}(\mathbf{X}) = \frac{1}{(\sqrt{2\pi})^{MN} \det^{\frac{M}{2}}(\boldsymbol{\Sigma})} e^{-\frac{1}{2} \mathrm{tr}\left[\mathbf{X} \boldsymbol{\Sigma}^{-1} \mathbf{X}^T\right]}.$$
– The difference from the i.i.d. column case is a $(\cdot)^T$ operation.
– Be careful with the position of $(\cdot)^T$ and understand why.

Complex-valued Gaussian random vector and circularly symmetric complex Gaussian vector
Def. A $K$-dimensional complex-valued random vector $\mathbf{x} = \mathbf{x}_r + j\mathbf{x}_s$ is a complex Gaussian random vector if $\hat{\mathbf{x}} = \begin{bmatrix} \mathbf{x}_r \\ \mathbf{x}_s \end{bmatrix}$ is a real-valued Gaussian random vector.
Def. The complex-valued Gaussian random vector $\mathbf{x}$ is circularly symmetric if
$$\boldsymbol{\Sigma}_{\hat{\mathbf{x}}} = E\left\{ (\hat{\mathbf{x}} - E[\hat{\mathbf{x}}])(\hat{\mathbf{x}} - E[\hat{\mathbf{x}}])^T \right\} = \frac{1}{2} \begin{bmatrix} \mathrm{Re}\{\mathbf{Q}\} & -\mathrm{Im}\{\mathbf{Q}\} \\ \mathrm{Im}\{\mathbf{Q}\} & \mathrm{Re}\{\mathbf{Q}\} \end{bmatrix}$$
for some $K \times K$ (Hermitian and) positive semidefinite matrix $\mathbf{Q}$, i.e., $\mathbf{Q} \succeq 0$.
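The block structure of $\boldsymbol{\Sigma}_{\hat{\mathbf{x}}}$ above can be checked numerically: draw zero-mean CSCG samples with a given $\mathbf{Q}$ and compare the sample covariance of the stacked real vector $\hat{\mathbf{x}}$ against $\frac{1}{2}\begin{bmatrix} \mathrm{Re}\,\mathbf{Q} & -\mathrm{Im}\,\mathbf{Q} \\ \mathrm{Im}\,\mathbf{Q} & \mathrm{Re}\,\mathbf{Q} \end{bmatrix}$. A minimal NumPy sketch (the matrix $\mathbf{Q}$ below is a hypothetical example, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 2, 200_000

# A Hermitian positive definite Q (hypothetical example values).
Q = np.array([[2.0,        0.5 + 0.5j],
              [0.5 - 0.5j, 1.0       ]])

# Draw x ~ CN(0, Q): x = L w with L L^H = Q and w i.i.d. CN(0, I),
# i.e., real and imaginary parts i.i.d. N(0, 1/2).
L = np.linalg.cholesky(Q)
w = (rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n))) / np.sqrt(2)
x = L @ w

# Stacked real vector x_hat = [x_r; x_s] and its sample covariance.
x_hat = np.vstack([x.real, x.imag])
Sigma_hat = x_hat @ x_hat.T / n

# Expected block structure from the circular-symmetry definition.
expected = 0.5 * np.block([[Q.real, -Q.imag],
                           [Q.imag,  Q.real]])
print(np.max(np.abs(Sigma_hat - expected)))  # small sampling error
```

The construction $\mathbf{x} = \mathbf{L}\mathbf{w}$ with $\mathbf{w}$ i.i.d. $\mathcal{CN}(0, 1)$ gives $E[\mathbf{x}\mathbf{x}^H] = \mathbf{L}\mathbf{L}^H = \mathbf{Q}$, and circular symmetry is inherited from $\mathbf{w}$ because it is preserved under linear maps.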