
The Gaussian Random Process

Perhaps the most important continuous state-space random process in communication systems is the Gaussian random process which, as we shall see, is very similar to, and shares many properties with, the jointly Gaussian random variables that we studied previously (see lecture notes and Chapter 4).

X(t), t ∈ T, is a Gaussian random process if, for any positive integer n, any choice of coefficients a_k, 1 ≤ k ≤ n, and any choice of sample times t_k ∈ T, 1 ≤ k ≤ n, the random variable given by the following weighted sum of random variables is Gaussian:

Z = a_1 X(t_1) + a_2 X(t_2) + \cdots + a_n X(t_n)

Using vector notation we can express the collection of samples as follows:

X = [X(t_1), X(t_2), \ldots, X(t_n)]

which, according to our study of jointly Gaussian random variables, is an n-dimensional Gaussian random vector. Hence, its pdf is known, since we know the pdf of a jointly Gaussian random variable. For the random process, however, there is also the nasty little parameter t to worry about!! The best way to see the connection to the Gaussian random variable and understand the pdf of a random process is by example:

Example: Determining the Distribution of a Gaussian Process

Consider a continuous-time random process X(t), t ∈ R, with continuous state-space (in this case amplitude) defined by the following expression:

X(t) = Y_1 + t Y_2

where Y_1 and Y_2 are independent Gaussian distributed random variables with zero mean and variance \sigma^2: Y_1, Y_2 \sim N(0, \sigma^2). The problem is to find the one- and two-dimensional probability density functions of the random process X(t).

The one-dimensional distribution of a random process, also known as the univariate distribution, is denoted F_{X,1}(u; t) and defined as Pr\{X(t) \le u\}. One can think of this as the distribution of a single random variable from the collection of random variables that constitute the random process. The general case is the n-dimensional distribution:

F_{X,n}(u_1, \ldots, u_n; t_1, \ldots, t_n) = Pr\{X(t_1) \le u_1, \ldots, X(t_n) \le u_n\}

A random process is completely characterized by giving its n-dimensional distribution for each positive integer n. In the present example the one-dimensional pdf represents the distribution of a sample of X(t), which is a Gaussian random variable since it is a linear combination of two independent Gaussian random variables. For the two-dimensional distribution we consider two fixed times, t_1 and t_2, and note that the two resulting random variables, X(t_1) and X(t_2), are jointly Gaussian.

To find the distribution of the sample X(t) we need to determine the mean and variance of the weighted sum of random variables:

E[X(t)] = E[Y_1] + t E[Y_2] = 0

VAR[X(t)] = \sigma^2 + t^2 \sigma^2 = \sigma^2 (1 + t^2)
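These moments are easy to check numerically. The following is a minimal Monte Carlo sketch; the values sigma = 1.5 and t = 2.0 are illustrative choices, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5       # illustrative; any sigma > 0 works
t = 2.0           # fixed sample time
n = 200_000

# Draw Y1, Y2 ~ N(0, sigma^2) independently and form X(t) = Y1 + t*Y2.
Y1 = rng.normal(0.0, sigma, n)
Y2 = rng.normal(0.0, sigma, n)
X = Y1 + t * Y2

# Sample moments should approach E[X(t)] = 0 and VAR[X(t)] = sigma^2 (1 + t^2).
print(X.mean())                           # close to 0
print(X.var(), sigma**2 * (1 + t**2))    # sample variance close to 11.25
```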

This gives us all we need to determine the one dimensional pdf of X(t):

f_{X,1}(x; t) = \frac{1}{\sqrt{2\pi\sigma^2(1+t^2)}} \, e^{-x^2 / (2\sigma^2(1+t^2))}
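One way to sanity-check this density is to compare the empirical CDF of simulated samples of X(t) against the Gaussian CDF with the claimed standard deviation. A sketch, with illustrative parameters sigma = 1.0 and t = 0.5:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
sigma, t = 1.0, 0.5      # illustrative values
n = 100_000

# Samples of X(t) = Y1 + t*Y2 at the fixed time t.
X = rng.normal(0, sigma, n) + t * rng.normal(0, sigma, n)
s = sigma * np.sqrt(1 + t**2)   # std dev predicted by f_{X,1}

# The empirical probability Pr{X <= s} should match the standard
# Gaussian CDF evaluated at 1, i.e. Phi(1) ~ 0.8413.
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
print(np.mean(X <= s), Phi(1.0))
```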

To find the two-dimensional pdf we fix the times of two samples, X(t_1) and X(t_2), and determine the resulting jointly Gaussian distribution. To do so we need to determine the correlation coefficient \rho(t_1, t_2). Note that we have already determined the means of both X(t_1) and X(t_2) to be zero, and the variances to be \sigma^2(1 + t_1^2) and \sigma^2(1 + t_2^2) respectively. To find the correlation coefficient we need to determine the covariance:

\begin{aligned}
C[X(t_1), X(t_2)] &= E[X(t_1) X(t_2)] - E[X(t_1)] E[X(t_2)] \\
&= E[X(t_1) X(t_2)] = R[X(t_1), X(t_2)] \\
&= E[(Y_1 + t_1 Y_2)(Y_1 + t_2 Y_2)] \\
&= \sigma^2 + t_1 t_2 \sigma^2 \\
&= \sigma^2 (1 + t_1 t_2)
\end{aligned}
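The key point in the derivation is that X(t_1) and X(t_2) share the same underlying Y_1 and Y_2, which is what produces the cross term. A quick numerical sketch, with illustrative values t_1 = 0.3 and t_2 = 1.7:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, t1, t2 = 1.0, 0.3, 1.7   # illustrative parameters
n = 200_000

# X(t1) and X(t2) are built from the SAME draws of Y1 and Y2.
Y1 = rng.normal(0, sigma, n)
Y2 = rng.normal(0, sigma, n)
X1, X2 = Y1 + t1 * Y2, Y1 + t2 * Y2

# Sample covariance vs. the closed form sigma^2 (1 + t1 t2).
emp = np.cov(X1, X2)[0, 1]
print(emp, sigma**2 * (1 + t1 * t2))   # both near 1.51
```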

Given the covariance, it is simple to determine the correlation coefficient:

\rho(t_1, t_2) = \frac{C[X(t_1), X(t_2)]}{\sqrt{VAR[X(t_1)] \, VAR[X(t_2)]}} = \frac{1 + t_1 t_2}{\sqrt{(1 + t_1^2)(1 + t_2^2)}}
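Note that the \sigma^2 factors cancel, so \rho depends only on the two sample times. This can be confirmed against the sample correlation; a sketch with the same illustrative times t_1 = 0.3 and t_2 = 1.7:

```python
import numpy as np

rng = np.random.default_rng(3)
t1, t2, n = 0.3, 1.7, 200_000   # illustrative; rho is independent of sigma

Y1, Y2 = rng.normal(0, 1, n), rng.normal(0, 1, n)
X1, X2 = Y1 + t1 * Y2, Y1 + t2 * Y2

# Closed-form correlation coefficient vs. the sample correlation.
rho_formula = (1 + t1 * t2) / np.sqrt((1 + t1**2) * (1 + t2**2))
rho_sample = np.corrcoef(X1, X2)[0, 1]
print(rho_formula, rho_sample)
```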

This gives us all we need to determine the two-dimensional pdf of X(t):

f_{X,2}(x_1, x_2; t_1, t_2) = \frac{1}{2\pi\sigma^2 |t_1 - t_2|} \exp\left\{ -\frac{(1 + t_2^2) x_1^2 - 2(1 + t_1 t_2) x_1 x_2 + (1 + t_1^2) x_2^2}{2\sigma^2 (t_1 - t_2)^2} \right\}
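As a consistency check, this closed form can be compared at any point against the generic zero-mean bivariate Gaussian pdf built directly from the covariance matrix of (X(t_1), X(t_2)). A sketch with illustrative parameters and an arbitrary evaluation point:

```python
import numpy as np

sigma, t1, t2 = 1.0, 0.4, 1.2   # illustrative parameters, t1 != t2
x1, x2 = 0.5, -0.3              # an arbitrary evaluation point

# Closed form derived above.
norm = 2 * np.pi * sigma**2 * abs(t1 - t2)
quad = ((1 + t2**2) * x1**2
        - 2 * (1 + t1 * t2) * x1 * x2
        + (1 + t1**2) * x2**2)
f_closed = np.exp(-quad / (2 * sigma**2 * (t1 - t2)**2)) / norm

# Generic bivariate Gaussian pdf with the covariance matrix of (X(t1), X(t2)):
# variances sigma^2 (1 + t^2) on the diagonal, covariance sigma^2 (1 + t1 t2).
C = sigma**2 * np.array([[1 + t1**2,    1 + t1 * t2],
                         [1 + t1 * t2, 1 + t2**2]])
x = np.array([x1, x2])
f_generic = (np.exp(-0.5 * x @ np.linalg.solve(C, x))
             / (2 * np.pi * np.sqrt(np.linalg.det(C))))

print(f_closed, f_generic)   # the two expressions agree
```

The agreement confirms, in particular, that det(C) = \sigma^4 (t_1 - t_2)^2, which is why the normalization contains |t_1 - t_2| and why the pdf degenerates when t_1 = t_2 (the two samples become identical).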
