
Problems in Detection and Estimation Theory

Joseph A. O'Sullivan
Electronic Systems and Signals Research Laboratory
Department of Electrical and Systems Engineering
Washington University in St. Louis
St. Louis, MO 63130
[email protected]

May 4, 2006

Introduction

In this document, problems in detection and estimation theory are collected. These problems were primarily written by Professor Joseph A. O'Sullivan. Most were written for examinations in ESE 524 or its predecessor EE 552A at Washington University in St. Louis, and are thereby copyrighted. Some come from qualifying examinations and others are simply problems from homework assignments in one of these classes. Use of these problems should include a citation to this document.

In order to give some organization to these problems, they are grouped into roughly six categories:

1. basic detection theory;
2. basic estimation theory;
3. detection theory;
4. estimation theory;
5. expectation-maximization;
6. recursive detection and estimation.

The separation into these categories is rather rough. Basic detection and estimation theory deal with finite-dimensional observations and test knowledge of introductory, fundamental ideas. Detection and estimation theory problems are more advanced, touching on random processes, joint detection and estimation, and other important extensions of the basic theory. The expectation-maximization algorithm has played an important role in research at Washington University since the early 1980s, motivating the inclusion of problems that test its fundamental understanding. Recursive estimation theory is primarily based on the Kalman filter. The recursive computation of a loglikelihood function leads to results in recursive detection. The problems are separated by theoretical area rather than by application, based on the view that theory is more fundamental. Many applications touched on here are explored in significantly more depth elsewhere.

1 Basic Detection Theory

1.1 Analytically Computable ROC

Suppose that under hypothesis H1, the random variable X has probability density function

    p_X(x) = \frac{3}{2} x^2,   -1 \le x \le 1.    (1)

Under hypothesis H0, the random variable X is uniformly distributed on [-1, 1].

a. Use the Neyman-Pearson lemma to determine the decision rule that maximizes the probability of detection subject to the constraint that the false alarm probability is less than or equal to 0.1. Find the resulting probability of detection.

b. Plot the receiver operating characteristic for this problem. Make your plot as good as possible.

1.2 Correlation Test of Two Gaussian Random Variables

Suppose that X1 and X2 are jointly distributed Gaussian random variables. There are two hypotheses for their joint distribution. Under either hypothesis they are both zero mean. Under hypothesis H1, they are independent with variances 20/9 and 5, respectively. Under hypothesis H2,

    E\left[ \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \begin{bmatrix} X_1 & X_2 \end{bmatrix} \right] = \begin{bmatrix} 4 & 4 \\ 4 & 9 \end{bmatrix}.    (2)

Determine the optimal Neyman-Pearson test. Sketch the form of the corresponding decision region.

1.3 Discrete-Time Exponentially Decaying Signal in AWGN

Suppose that the two data models are

    Hypothesis 1: R(n) = \left(\frac{1}{4}\right)^n + W(n),    (3)
    Hypothesis 2: R(n) = c \left(\frac{1}{3}\right)^n + W(n),    (4)

where under either hypothesis W(n) is a sequence of independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and variance σ²; under each hypothesis, the noise W(n) is independent of the signal. The variable c and the variance σ² are known. Assume that measurements are available for n = 0, 1, ..., N-1.

a. Find the loglikelihood ratio test.

b. What single quantity parameterizes performance?

c. What is the limiting signal-to-noise ratio as N goes to infinity?

d. What value of the variable c minimizes performance?
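For parts b through d, recall that for deciding between two known signals in additive white Gaussian noise, the performance of the likelihood ratio test is governed by the deflection d² = (1/σ²) Σ_{n=0}^{N-1} (s1(n) − s2(n))², where here s1(n) = (1/4)^n and s2(n) = c (1/3)^n. A minimal Matlab sketch such as the following (the values of c and σ² are illustrative choices, not specified by the problem) tabulates the deflection as N grows and shows it converging to a finite limit:

    % Deflection d^2 = ||s1 - s2||^2 / sigma^2 for Problem 1.3
    % (illustrative values of c and sigma^2 assumed).
    c = 1; sigma2 = 1;
    for N = [5 10 20 50 100]
        n  = (0:N-1)';
        s1 = (1/4).^n;                    % signal under Hypothesis 1
        s2 = c*(1/3).^n;                  % signal under Hypothesis 2
        d2 = sum((s1 - s2).^2)/sigma2;    % deflection for N samples
        fprintf('N = %3d, d^2 = %.6f\n', N, d2);
    end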
1.4 Two Zero Mean Gaussian Random Variables

Two random variables (R1, R2) are measured. Assume that under Hypothesis 0, the two random variables are independent Gaussian random variables with mean zero and variance 2. Under Hypothesis 1, the two random variables are independent Gaussian random variables with mean zero and variance 3.

a. Find the decision rule that maximizes the probability of detection subject to a constraint on the probability of false alarm, PF ≤ α.

b. Derive an equation for the probability of detection as a function of α.

1.5 Exponential Random Variables in Queuing

In queuing systems, packets or messages are processed by blocks in the system. These processing blocks are often called queues. A common model for a queue is that the time it takes to process a message is an exponential random variable. There may be an additional model for the times at which messages enter the queue, a common model of which is a Poisson process.

Recall that if X is exponentially distributed with mean μ, then

    p_X(x) = \frac{1}{\mu} \exp\left( -\frac{x}{\mu} \right),   x \ge 0.    (5)

Suppose that one queue is being monitored. A message enters at time t = 0 and exits at time t = T. Under hypothesis H1, T is an exponentially distributed random variable with mean μ1; under H0, T is exponentially distributed with mean μ0. Assume that μ1 > μ0.

a. Prove that the likelihood ratio test is equivalent to comparing T to a threshold γ.

b. For an optimum Bayes test, find γ as a function of the costs and the a priori probabilities.

c. Now assume that a Neyman-Pearson test is used. Find γ as a function of the bound on the false alarm probability PF, where PF = P(say H1 | H0 is true).

d. Plot the ROC for this problem for μ0 = 1 and μ1 = 5.

e. Now consider N independent and identically distributed measurements of T denoted T1, T2, ..., TN. Show that the likelihood ratio test may be reduced to comparing

    l(T) = \frac{1}{N} \sum_{i=1}^{N} T_i    (6)

to a threshold. Find the probability density function for l(T) under each hypothesis.

1.6 Gaussian Variance Test

In many problems in radar, the reflectivity is a complex Gaussian random variable. Sequential measurements of a given target that fluctuates rapidly may yield independent realizations of these random variables. It may then be of interest to decide between two models for the variance. Assume that N independent measurements are made, with resulting i.i.d. random variables Ri, i = 1, 2, ..., N. The models are

    H1 : R_i is N(0, σ1²),   i = 1, 2, ..., N,    (7)
    H0 : R_i is N(0, σ0²),   i = 1, 2, ..., N,    (8)

where σ1 > σ0.

a. Find the likelihood ratio test.

b. Show that the likelihood ratio test may be simplified to comparing the sufficient statistic

    l(R) = \frac{1}{N} \sum_{i=1}^{N} R_i^2    (9)

to a threshold.

c. Find an expression for the probability of false alarm, PF, and the probability of miss, PM.

d. Plot the ROC for σ0² = 1, σ1² = 2, and N = 2.

1.7 Binary Observations: Test for Bias in Coin Flips

Suppose that there are only two possible outcomes of an experiment; call the outcomes heads and tails. The problem here is to decide whether the process used to generate the outcomes is fair. The hypotheses are

    H1 : P(R_i = heads) = p,   i = 1, 2, ..., N,    (10)
    H0 : P(R_i = heads) = 0.5,   i = 1, 2, ..., N.    (11)

Under each hypothesis, the random variables Ri are i.i.d.

a. Determine the optimal likelihood ratio test. Show that the number of heads is a sufficient statistic.

b. Note that the sufficient statistic does not depend on p, but the threshold does. For a finite number N, if only nonrandomized tests are considered, then the ROC has N + 1 points on it. For N = 10 and p = 0.7, plot the ROC for this problem (a computational sketch follows this problem). You may want to do this using a computer because you will need the cumulative distribution function for a binomial.

c. Now consider a randomized test. In a randomized test, for each value of the sufficient statistic, the decision is random. Hypothesis H1 is chosen with probability φ(l) and H0 is chosen with probability 1 − φ(l). Consider the Neyman-Pearson criterion with probability of false alarm PF = α. Show that the optimal randomized strategy is a probabilistic mixture of two ordinary likelihood ratio tests: the first likelihood ratio test achieves the next greater probability of false alarm than α, while the second achieves the next lower probability of false alarm than α. Find φ as a function of α. Finally, note that the resulting ROC is obtained from the original ROC by connecting the achievable points with straight lines.
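For part b, the operating points of the nonrandomized threshold tests (decide H1 when the number of heads k is at least a threshold) follow directly from the binomial cumulative distribution function. A minimal Matlab sketch, assuming the Statistics Toolbox function binocdf is available:

    % ROC points for Problem 1.7b: N coin flips, decide H1 if k >= t.
    N = 10; p = 0.7;
    t  = (0:N+1)';                     % thresholds, from "always say H1" to "never say H1"
    PF = 1 - binocdf(t-1, N, 0.5);     % false alarm probability under H0
    PD = 1 - binocdf(t-1, N, p);       % detection probability under H1
    plot(PF, PD, 'o-'), xlabel('P_F'), ylabel('P_D')

Connecting the plotted points with straight lines, as plot does here, previews the randomized-test ROC of part c.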
1.8 Likelihood Ratio as a Random Variable

This problem closely follows Problem 2.2.13 from H. L. Van Trees, Vol. 1. The likelihood ratio Λ(R) is a random variable

    \Lambda(R) = \frac{p(R \mid H_1)}{p(R \mid H_0)}.    (12)

Prove the following properties of the random variable Λ.

a. E(Λ^n | H1) = E(Λ^{n+1} | H0).

b. E(Λ | H0) = 1.

c. E(Λ | H1) − E(Λ | H0) = var(Λ | H0).

1.9 Matlab Problem

Write Matlab subroutines that allow you to determine detection performance experimentally. The case studied here is the signal in additive Gaussian noise problem, for which the experimental performance should be close to optimal.

Let Wk be i.i.d. N(0, σ²), independent of the signal. In the general case, there can be a signal under either hypothesis,

    H0 : R_k = s_{0k} + W_k,   k = 1, 2, ..., K,    (13)
    H1 : R_k = s_{1k} + W_k,   k = 1, 2, ..., K.    (14)

For this problem, we look at the special case of deciding whether there is one or there are two exponentials in noise, as a function of the noise level and the two exponentials:

    H0 : R_k = \exp(-k/4) + W_k,   k = 1, 2, ..., K,    (15)
    H1 : R_k = \exp(-k/4) + \exp(-k/(4T)) + W_k,   k = 1, 2, ..., K.    (16)
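As a starting point for such an experiment (a minimal sketch, not the required subroutines; the values of K, T, σ, and the number of trials are illustrative assumptions), note that for known signals in white Gaussian noise the log-likelihood ratio is monotone in the correlation (s1 − s0)ᵀR, so an empirical ROC can be traced by sweeping a threshold on that statistic:

    % Monte Carlo estimate of detection performance for Problem 1.9
    % (illustrative choices of K, T, sigma, and number of trials).
    K = 20; T = 3; sigma = 1; Ntrials = 10000;
    k  = (1:K)';
    s0 = exp(-k/4);                           % signal under H0
    s1 = exp(-k/4) + exp(-k/(4*T));           % signal under H1
    d  = s1 - s0;                             % LRT correlates the data with s1 - s0
    R0 = repmat(s0, 1, Ntrials) + sigma*randn(K, Ntrials);   % H0 data, one column per trial
    R1 = repmat(s1, 1, Ntrials) + sigma*randn(K, Ntrials);   % H1 data
    stat0 = d' * R0;                          % sufficient statistic under H0
    stat1 = d' * R1;                          % sufficient statistic under H1
    gamma = linspace(min([stat0 stat1]), max([stat0 stat1]), 200);
    PF = zeros(size(gamma)); PD = zeros(size(gamma));
    for i = 1:numel(gamma)
        PF(i) = mean(stat0 > gamma(i));       % empirical false alarm probability
        PD(i) = mean(stat1 > gamma(i));       % empirical detection probability
    end
    plot(PF, PD), xlabel('P_F'), ylabel('P_D'), title('Empirical ROC')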