
Performance Guarantees in Sample-Starved Space-Time Adaptive Processing
Ben Robinson¹ (Presenting), Robert Malinas², Alfred O. Hero, III²
¹Air Force Research Lab, Sensors Directorate   ²University of Michigan
Distribution A: Approved for unlimited distribution, PA approval #'s 88ABW-2020-3037, AFRL-2020-0561
January 5, 2021

Summary
• Problem
  • Sample-starved radar detection in Space-Time Adaptive Processing
• Method
  • High-dimensional, random matrix theory asymptotics
• Principal findings
  • Consistent estimates of popular algorithms' performances
  • Predictions matched empirically

Motivation: Space-Time Adaptive Processing
• Peace support operations
• Multinational coalition operations
• Air control
• Homeland Defense
• Counter Narcotics
• Combat Search and Rescue
• Civil Aviation
• Weather Radar Microburst Detection
• Automotive Collision Avoidance
• Automotive Traffic Monitoring

Data Collection: Pulsed Radar Arrays¹
¹M. A. Richards, Jim Scheer, and William A. Holm. Principles of Modern Radar. Tes Dee Publishing Pvt. Ltd. (published by arrangement), 2012.

Goal: Determine Which Range Cells Contain Targets
Hypothesis testing formulation²:
$$H_0 : X = W, \qquad H_1 : X = a\mu + W,\ a \neq 0,$$
where
• $a \in \mathbb{C}$ unknown;
• $\mu \in \mathbb{C}^{p}$ known "steering vector" with $\|\mu\|_2 = 1$;
• $R \in \mathbb{C}^{p \times p}$ Hermitian, positive-definite, unknown;
• $\mathbb{E}(W) = 0$, $\operatorname{cov}(W) = R$, $\mathbb{E}\,|(R^{-1/2} W)_i|^3 \leq C$ for all $i$.
²Michael C. Wicks et al. "Space-time adaptive processing: A knowledge-based perspective for airborne radar". In: IEEE Signal Processing Magazine 23.1 (2006), pp. 51-65.

Adaptive Matched Filter
• Prevailing test is the adaptive matched filter³:
$$\hat{T}(X) := \frac{|\mu^H \hat{R}^{-1} X|^2}{\mu^H \hat{R}^{-1} \mu} \underset{H_0}{\overset{H_1}{\gtrless}} \tau.$$
• Assume $\{W_i\}_{i=1}^{n}$ i.i.d.
³Frank C. Robey et al. "A CFAR adaptive matched filter detector". In: IEEE Transactions on Aerospace and Electronic Systems 28.1 (1992), pp. 208-216.
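The following is a minimal numerical sketch, not part of the original slides, of how the adaptive matched filter statistic above can be evaluated. The Gaussian clutter model, the dimensions, and the helper name amf_statistic are illustrative assumptions only.

```python
import numpy as np

def amf_statistic(X, mu, R_hat):
    """Adaptive matched filter statistic |mu^H R_hat^{-1} X|^2 / (mu^H R_hat^{-1} mu)."""
    Rinv_mu = np.linalg.solve(R_hat, mu)   # R_hat^{-1} mu; R_hat Hermitian, so mu^H R_hat^{-1} = (R_hat^{-1} mu)^H
    num = np.abs(Rinv_mu.conj() @ X) ** 2  # |mu^H R_hat^{-1} X|^2
    den = np.real(mu.conj() @ Rinv_mu)     # mu^H R_hat^{-1} mu (real and positive)
    return num / den

# Toy H0 example: complex Gaussian clutter with covariance R, sample covariance as R_hat.
rng = np.random.default_rng(0)
p, n = 64, 128
mu = np.ones(p, dtype=complex) / np.sqrt(p)                        # unit-norm steering vector
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
R = A @ A.conj().T / p + np.eye(p)                                 # Hermitian, positive-definite
L = np.linalg.cholesky(R)
cn = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
W = L @ cn(p, n)                                                   # training snapshots
R_hat = W @ W.conj().T / n                                         # sample covariance estimate
X = L @ cn(p)                                                      # test cell under H0 (no target)
print(amf_statistic(X, mu, R_hat))
```

Here n is comfortably larger than p, so the sample covariance is invertible; the slides' point is that STAP rarely affords that luxury.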
Technical Challenges
• Clutter varies along the range dimension: choosing n too large introduces heteroscedasticity into the training data.
• Therefore, n must be small, i.e., on the order of the data dimension p. Classical estimators for R, such as the sample covariance $\frac{1}{n}\sum_{i=1}^{n} W_i W_i^H$, are singular or ill-conditioned.

Context
• Existing approaches [14, 15] lack performance guarantees.

Formulation: Sequence of problems
$$H_0^{(k)} : X = W_k, \qquad H_1^{(k)} : X = a\mu_k + W_k,\ a \neq 0,$$
• $R_k \in \mathbb{C}^{p_k \times p_k}$, a priori equally likely to $U_k R_k U_k^H$ for unitary $U_k$ ($R_k = \mathbb{E}\, W_k W_k^H$ with spectrum that converges to a limit);
• $\mathbf{W}_k := \operatorname{matrix}(\{W_{i,k}\}_{i=1}^{n_k}) \in \mathbb{C}^{p_k \times n_k}$ (satisfies independence and moment conditions given later);
• $\mu_k \in \mathbb{C}^{p_k}$;
• $$\hat{T}_k(X) := \frac{|\mu_k^H \hat{R}_k^{-1} X|^2}{\mu_k^H \hat{R}_k^{-1} \mu_k} \underset{H_0}{\overset{H_1}{\gtrless}} \tau.$$
• Asy(γ): the number of samples $n_k$ and the number of dimensions $p_k$ follow the proportional-growth limit $p_k / n_k \to \gamma \in (0,1) \cup (1,\infty)$, with $p_k, n_k \to \infty$ as $k \to \infty$.

Shrinkage Estimation: Use Sample Eigenvectors
• Generate an eigendecomposition of the sample covariance matrix
$$S_k = \frac{1}{n_k} \mathbf{W}_k \mathbf{W}_k^H = U_k \Lambda_k U_k^H.$$
• Consider shrinkage estimators of the form
$$\hat{R}_k = U_k D_k U_k^H$$
for some positive-definite diagonal matrix $D_k$ (that usually depends only on $\Lambda_k$).

Asymptotic Ledoit-Péché Equivalence
Suppose $f : [0,\infty) \to (0,\infty)$ is continuous. We call a shrinkage estimator $\hat{R}_k = U_k D_k(\Lambda_k) U_k^H$ f-consistent if
$$\|\operatorname{diag} D_k(\Lambda_k) - f(\operatorname{diag} \Lambda_k)\|_\infty \xrightarrow{p} 0$$
as $k \to \infty$. Let
$$\delta(\lambda) = \begin{cases} \dfrac{\lambda}{|1 - \gamma - \gamma\lambda\, \breve{m}_F(\lambda)|^2}, & \text{if } \lambda > 0, \\[1ex] \dfrac{1}{(\gamma - 1)\, \breve{m}_{\underline{F}}(0)}, & \text{if } \lambda = 0. \end{cases}$$
Here, $F$ and $\underline{F}$ are the limiting spectral distribution functions of $n_k^{-1} \mathbf{W}_k \mathbf{W}_k^H$ and $p_k^{-1} \mathbf{W}_k^H \mathbf{W}_k$, respectively, and $\breve{m}_G(x)$ is the limit of $m_G(z) = \int_{\mathbb{R}} (t - z)^{-1}\, dG(t)$ (the Stieltjes transform of $G$) as $z$ in the upper half plane approaches $x \in \mathbb{R}$ from above.

Theorem. An estimator (LWD) of Ledoit and Wolf⁴ is (asymptotically) LP equivalent, meaning δ-consistent.
⁴Olivier Ledoit and Michael Wolf. "Analytical nonlinear shrinkage estimation of large-dimensional covariance matrices". In: Annals of Statistics (Forthcoming).
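As a rough illustration of the rotation-equivariant form $\hat{R}_k = U_k D_k U_k^H$ above, the sketch below (not from the slides) eigendecomposes the sample covariance and replaces its eigenvalues through a shrinkage function. The toward_mean rule is only a stand-in: an LP-equivalent estimator such as the Ledoit-Wolf analytical rule would substitute an approximation of the δ(·) function for it.

```python
import numpy as np

def shrinkage_estimator(W, shrink):
    """Rotation-equivariant estimator R_hat = U diag(shrink(lam)) U^H, where
    U, lam come from the sample covariance S = (1/n) W W^H of the p x n matrix W."""
    p, n = W.shape
    S = W @ W.conj().T / n
    lam, U = np.linalg.eigh(S)   # sample eigenvalues (real) and eigenvectors
    d = shrink(lam)              # new eigenvalues; eigenvectors are kept
    return (U * d) @ U.conj().T

def toward_mean(lam, rho=0.5):
    """Placeholder shrinkage rule: pull eigenvalues toward their mean so the
    estimate stays positive-definite even when n < p (sample-starved regime)."""
    return np.maximum((1.0 - rho) * lam + rho * lam.mean(), 1e-12)

rng = np.random.default_rng(1)
p, n = 100, 50                   # fewer snapshots than dimensions
W = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
R_hat = shrinkage_estimator(W, toward_mean)
print(np.linalg.cond(R_hat))     # finite, unlike the rank-deficient sample covariance
```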
Principal Findings: False-alarm rate of shrinkage estimators
Recall
$$\hat{T}_k(X) := \frac{|\mu_k^H \hat{R}_k^{-1} X|^2}{\mu_k^H \hat{R}_k^{-1} \mu_k}.$$
Let
$$p_{\mathrm{fa}}^{(k)}(\tau) := \Pr\big[\hat{T}_k > \tau \mid H_0^{(k)}\big].$$

Theorem. If $\hat{R}_k$ is asymptotically LP equivalent, then
$$p_{\mathrm{fa}}^{(k)}(\tau) \xrightarrow{p} e^{-\tau}$$
as $k \to \infty$.
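For intuition about the $e^{-\tau}$ limit, the Monte Carlo sketch below (not from the slides) checks the clairvoyant detector that plugs in the true $R$: under circular complex Gaussian clutter its statistic is exactly Exp(1)-distributed, so its false-alarm rate is $e^{-\tau}$, the same curve the theorem says LP-equivalent shrinkage detectors approach as $p_k, n_k \to \infty$. The clutter covariance, steering vector, and trial count are arbitrary illustrative choices.

```python
import numpy as np

# Empirical check that Pr[T > tau] = e^{-tau} for the clairvoyant matched filter
# (true R plugged in) under circular complex Gaussian clutter.
rng = np.random.default_rng(2)
p, trials = 32, 200_000
mu = np.zeros(p, dtype=complex)
mu[0] = 1.0                                            # unit-norm steering vector
R = np.diag(np.linspace(1.0, 5.0, p)).astype(complex)  # simple Hermitian positive-definite clutter covariance
L = np.linalg.cholesky(R)
Rinv_mu = np.linalg.solve(R, mu)
den = np.real(mu.conj() @ Rinv_mu)                     # mu^H R^{-1} mu

# H0 test cells: columns are i.i.d. CN(0, R) vectors.
X = L @ ((rng.standard_normal((p, trials)) + 1j * rng.standard_normal((p, trials))) / np.sqrt(2))
T = np.abs(Rinv_mu.conj() @ X) ** 2 / den              # matched filter statistic per trial

for tau in (0.5, 1.0, 2.0, 4.0):
    print(f"tau={tau}: empirical {np.mean(T > tau):.4f}  vs  exp(-tau) {np.exp(-tau):.4f}")
```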