Forward-Backward Hankel Matrix Fitting for Spectral Super-Resolution
Zai Yang and Xunmeng Wu
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
[email protected]

Abstract—Hankel-based approaches form an important class of methods for line spectral estimation within the recent spectral super-resolution framework. However, they suffer from the fundamental limitation that their estimated signal poles do not lie on the unit circle in general, causing difficulties of physical interpretation and performance loss. In this paper, we present a modified Hankel approach called forward-backward Hankel matrix fitting (FB-Hankel) that can be implemented by simply modifying the existing algorithms. We show analytically that the new approach has great potential to restrict the estimated poles on the unit circle. Numerical results are provided that corroborate our analysis and demonstrate the advantage of FB-Hankel in improving the estimation accuracy.^1

Index Terms—Forward-backward Hankel matrix fitting, line spectral estimation, spectral super-resolution, Vandermonde decomposition, Kronecker's theorem.

^1 The research of the project was supported by the National Natural Science Foundation of China under Grants 11922116 and 61977053.

I. INTRODUCTION

Line spectral estimation refers to the process of estimating the spectral modes of a signal given its discrete samples. In particular, we have the uniformly sampled data sequence $\{y_j\}$ that is given by

$$ y_j = \sum_{k=1}^{K} s_k z_k^{j} + e_j, \quad |z_k| = 1, \qquad (1) $$

where the poles $z_k = e^{i 2\pi f_k}$, $k = 1, \dots, K$, lie on the unit circle and have a one-to-one connection to the unknown frequencies $f_k \in [0, 1)$, $k = 1, \dots, K$, with $i = \sqrt{-1}$. In (1), $s_k \in \mathbb{C}$ denotes the amplitude of the $k$th component and $e_j \in \mathbb{C}$ is noise.
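As a concrete illustration of the data model in (1), the following minimal Python sketch generates noisy samples $y_j$ on the index set $j = -2n, \dots, 2n$ used later in Section II; the values of $n$, $K$, the frequencies, amplitudes, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the data model (1): K poles on the unit circle, samples
# indexed j = -2n, ..., 2n (N = 4n + 1). All numerical values are assumed.
rng = np.random.default_rng(0)
n, K = 8, 3                          # N = 4n + 1 = 33 samples
j = np.arange(-2 * n, 2 * n + 1)
f = np.array([0.10, 0.13, 0.40])     # unknown frequencies f_k in [0, 1)
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # amplitudes s_k
z = np.exp(2j * np.pi * f)           # poles z_k = e^{i 2 pi f_k}, |z_k| = 1
y_clean = (z[None, :] ** j[:, None]) @ s                    # sum_k s_k z_k^j
noise = 0.01 * (rng.standard_normal(j.size) + 1j * rng.standard_normal(j.size))
y = y_clean + noise                  # noisy samples y_j of (1)
```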
Line spectral estimation has a long history of research, and a great number of approaches have been developed, see [1]. With the development of compressed sensing and sparse signal processing, Candès and many other researchers have established the framework of spectral super-resolution, also known as gridless sparse methods, for line spectral estimation by utilizing the signal sparsity and optimization theory in the past decade, see [2]. Roughly speaking, these methods follow two lines of research: positive-semidefinite- (PSD-) Toeplitz-based and Hankel-based. In particular, PSD-Toeplitz-based methods are rooted in the exact data model in (1), and a PSDness constraint is introduced to capture the knowledge $|z_k| = 1$, see [3]–[5]. Differently, Hankel-based methods drop the prior knowledge $|z_k| = 1$ so that the PSDness constraint is absent, which eventually results in simple algorithm design [6]–[8]. However, this causes a fundamental limitation of Hankel-based methods: to be specific, the estimated poles do not lie on the unit circle in general, resulting in difficulties of physical interpretation and potential performance loss (see the main text). This paper aims at resolving this fundamental problem.

In this paper, by using the classic forward-backward technique [9], we propose a modified Hankel-based approach for line spectral estimation, named forward-backward Hankel matrix fitting (FB-Hankel). The core of FB-Hankel is introducing a (conjugated) backward Hankel matrix which is composed of the same spectral modes as, and processed jointly with, the original (forward) Hankel matrix. We show analytically that FB-Hankel can substantially shrink the solution space to approach the ground truth and has great potential to push the estimated poles onto the unit circle. An algorithm is presented to solve the FB-Hankel optimization problem by simply modifying the algorithm in [8]. Numerical simulations are presented that demonstrate the great capabilities of the proposed approach in restricting the estimated poles on the unit circle and improving the estimation accuracy.

Notation: The sets of real and complex numbers are denoted $\mathbb{R}$ and $\mathbb{C}$, respectively. Boldface letters are reserved for vectors and matrices. The complex conjugate and amplitude of a scalar $a$ are denoted $\bar{a}$ and $|a|$. The transpose of a matrix $A$ is denoted $A^T$. The $j$th entry of a vector $x$ is $x_j$. The diagonal matrix with vector $x$ on the diagonal is denoted $\mathrm{diag}(x)$. The rank of a matrix $A$ is denoted $\mathrm{rank}(A)$.

Organization: The data model is restated and some preliminaries are presented in Section II. The FB-Hankel approach is presented in Section III. Numerical simulations are provided in Section IV, and conclusions are drawn in Section V.

II. PRELIMINARIES

A. Hankel-Based Methods

Without loss of generality, we assume the sample size $N = 4n + 1$ and the sample index $j \in \{-2n, \dots, 2n\}$.^2 Let $a(z) = [z^{-n}, \dots, z^{n}]^T$ represent a sampled complex exponential of size $2n + 1$ with pole $z \in \mathbb{C}$. It is a sinusoidal wave if $|z| = 1$. In the absence of noise, the samples $\{y_j\}$ can be used to form the $(2n+1) \times (2n+1)$ square Hankel matrix

$$ H(y) = \begin{bmatrix} y_{-2n} & y_{-2n+1} & \cdots & y_{0} \\ y_{-2n+1} & y_{-2n+2} & \cdots & y_{1} \\ \vdots & \vdots & \ddots & \vdots \\ y_{0} & y_{1} & \cdots & y_{2n} \end{bmatrix} = \sum_{k=1}^{K} s_k a(z_k) a^T(z_k) = A(z)\,\mathrm{diag}(s)\,A^T(z), \qquad (2) $$

where $A(z) = [a(z_1), \dots, a(z_K)]$ is a $(2n+1) \times K$ Vandermonde matrix. Therefore, if $K \le 2n$, then $A(z)$ has rank $K$, and so does $H(y)$. The Hankel-based methods do line spectral estimation by recovering the low-rank Hankel matrix $H(y)$ from the samples.

^2 Otherwise, we can add up to three more samples such that $N = 4n + 1$, which are then dealt with as missing data.

B. Forward-Backward Processing

Consider the conjugated backward data sequence $\{\bar{y}_{-j} : j = -2n, \dots, 2n\}$ that, in the absence of noise, is given by

$$ \bar{y}_{-j} = \overline{\sum_{k=1}^{K} s_k z_k^{-j}} = \sum_{k=1}^{K} \bar{s}_k z_k^{j} \qquad (3) $$

since $|z_k| = 1$. Therefore, the sequence $\{\bar{y}_{-j}\}$ corresponds to a virtual signal that is also composed of the poles $\{z_k\}$. The joint operation on $\{y_j\}$ and $\{\bar{y}_{-j}\}$ for line spectral estimation is called forward-backward processing, which has achieved great success in array signal processing [9]–[12].
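To make the constructions in (2) and (3) concrete, the sketch below (helper names are our own, not from the paper) forms the forward Hankel matrix and the conjugated backward sequence from the noiseless samples y_clean of the earlier sketch, and checks that the forward matrix and the stacked forward-backward matrix both have rank $K$, as exploited in Section III.

```python
import numpy as np

def hankel_square(x):
    """(2n+1) x (2n+1) Hankel matrix H(x) of (2), x indexed j = -2n, ..., 2n."""
    m = (len(x) + 1) // 2                 # m = 2n + 1
    return np.array([x[r:r + m] for r in range(m)])

def backward(x):
    """Conjugated backward sequence of (3): entry j equals conj(x_{-j})."""
    return np.conj(np.asarray(x)[::-1])

# Using the noiseless samples y_clean from the previous sketch:
Hf = hankel_square(y_clean)               # forward Hankel matrix H(y)
Hb = hankel_square(backward(y_clean))     # backward Hankel matrix H(y_tilde)
stacked = np.hstack([Hf, Hb])             # [H(y) | H(y_tilde)]
print(np.linalg.matrix_rank(Hf),          # K, by the decomposition in (2)
      np.linalg.matrix_rank(stacked))     # also K: both blocks share A(z)
```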
III. FORWARD-BACKWARD HANKEL MATRIX FITTING

A. A General Framework of Spectral Super-Resolution

In this subsection, we present a general framework of spectral super-resolution by exploiting signal sparsity. Assume that the number of poles $K$ is known. Let

$$ S_0 = \left\{ x \in \mathbb{C}^N : x_j = \sum_{k=1}^{K} s_k z_k^{j},\ s_k \in \mathbb{C},\ |z_k| = 1 \right\} \qquad (4) $$

denote the set of candidate signals composed of at most $K$ sinusoids. To do line spectral estimation, the framework of spectral super-resolution tells us to find some candidate signal $x \in S_0$ that is nearest to its noisy (or incomplete) measurement $y = [y_{-2n}, \dots, y_{2n}]^T$ and then retrieve the estimated poles from $x$. To be specific, we attempt to solve the following optimization problem:

$$ \min_{x} \ \mathrm{Loss}(x - y), \quad \text{subject to } x \in S_0, \qquad (5) $$

where $\mathrm{Loss}(\cdot)$ denotes a loss function that is chosen according to the sampling environment (e.g., noise, missing data, etc.). The key lies in how to characterize the constraint $x \in S_0$.

An exact characterization of the constraint $x \in S_0$ is provided in [4], in terms of the atomic $\ell_0$ norm, by introducing a PSD Toeplitz matrix. The resulting optimization problem and its variants form the PSD-Toeplitz-based methods. Differently, the Hankel-based methods use the low-rank set

$$ S_1 = \left\{ x \in \mathbb{C}^N : \mathrm{rank}\left(H(x)\right) \le K \right\} \qquad (6) $$

that includes $S_0$. By changing $S_0$ to $S_1$, the original problem (5) is then relaxed to the following Hankel matrix optimization model:

$$ \min_{x} \ \mathrm{Loss}(x - y), \quad \text{subject to } x \in S_1. \qquad (7) $$

The optimization model in (7) and its variants result in the Hankel-based methods.

B. Forward-Backward Hankel Matrix Fitting

Let $\tilde{x} = [\bar{x}_{2n}, \dots, \bar{x}_{-2n}]^T$ be the conjugated backward version of a vector $x \in \mathbb{C}^N$. Let also

$$ S_2 = \left\{ x \in \mathbb{C}^N : \mathrm{rank}\left([H(x) \mid H(\tilde{x})]\right) \le K \right\}, \qquad (8) $$

which is a subset of $S_1$. By changing $S_0$ to $S_2$, we propose the optimization model

$$ \min_{x} \ \mathrm{Loss}(x - y), \quad \text{subject to } x \in S_2, \qquad (9) $$

which is called forward-backward Hankel matrix fitting (FB-Hankel). Correspondingly, the model in (7) is referred to as forward-only Hankel matrix fitting (FO-Hankel).

For any $x \in S_0$, by making use of (2) and (3), we have that

$$ H(x) = A(z)\,\mathrm{diag}(s)\,A^T(z), \qquad (10) $$

$$ H(\tilde{x}) = A(z)\,\mathrm{diag}(\bar{s})\,A^T(z). \qquad (11) $$

It follows that

$$ [H(x) \mid H(\tilde{x})] = A(z) \left[ \mathrm{diag}(s)\,A^T(z) \mid \mathrm{diag}(\bar{s})\,A^T(z) \right] \qquad (12) $$

and thus $\mathrm{rank}([H(x) \mid H(\tilde{x})]) \le K$. This implies that $x \in S_2$ and thus

$$ S_0 \subset S_2 \subset S_1. \qquad (13) $$

As a relaxation of (5), consequently, the FB-Hankel model in (9) is at least as tight as the FO-Hankel model in (7) and is expected to have higher accuracy. Indeed, the FB-Hankel model is a tighter relaxation, as shown in the ensuing subsection.

C. Analysis

We show in this subsection that, unlike the PSD-Toeplitz-based methods, the estimated poles of FO-Hankel and FB-Hankel may not lie on the unit circle. Compared to FO-Hankel, however, the proposed FB-Hankel has better capability of restricting the estimated poles on the unit circle. In particular, we let

$$ S_1' = \left\{ x \in \mathbb{C}^N : x_j = \sum_{k=1}^{K} s_k z_k^{j},\ s_k, z_k \in \mathbb{C} \right\} \qquad (14) $$

be the set obtained by removing the constraint $|z_k| = 1$ in $S_0$. Evidently, $S_0 \subset S_1' \subset S_1$. Moreover, Kronecker's theorem [13] says that if $x \in S_1$ and $K \le 2n$, then $x \in S_1'$ except for degenerate cases. Therefore, $S_1$ and $S_1'$ are approximately equal.
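The paper solves the FB-Hankel problem (9) by modifying the algorithm of [8], which is not reproduced here. Purely as an illustration of the constraint $x \in S_2$, the following is a generic Cadzow-style alternating-projection sketch under our own assumptions (the function names and the fusion of the forward and backward estimates are ours): it alternates a rank-$K$ truncation of the stacked matrix $[H(x) \mid H(\tilde{x})]$ with anti-diagonal averaging that restores the Hankel and conjugate-backward structure.

```python
import numpy as np

def hankel_square(x):
    """(2n+1) x (2n+1) Hankel matrix H(x) of (2), x indexed j = -2n, ..., 2n."""
    m = (len(x) + 1) // 2
    return np.array([x[r:r + m] for r in range(m)])

def antidiag_average(M):
    """Map a square matrix back to a sequence by averaging its anti-diagonals."""
    m = M.shape[0]
    F = M[::-1]                           # anti-diagonals become diagonals
    return np.array([np.trace(F, offset=k) / (m - abs(k))
                     for k in range(-(m - 1), m)])

def fb_hankel_fit(y, K, iters=200):
    """Heuristic fit for (9): alternate a rank-K truncation of [H(x) | H(x_tilde)]
    with a projection back onto the Hankel / conjugate-backward structure."""
    x = np.asarray(y, dtype=complex).copy()
    for _ in range(iters):
        xt = np.conj(x[::-1])                         # conjugated backward signal
        M = np.hstack([hankel_square(x), hankel_square(xt)])
        U, svals, Vh = np.linalg.svd(M, full_matrices=False)
        M_K = (U[:, :K] * svals[:K]) @ Vh[:K]         # best rank-K approximation
        m = M.shape[0]
        xf = antidiag_average(M_K[:, :m])             # signal from the forward block
        xb = antidiag_average(M_K[:, m:])             # estimate of x_tilde
        x = 0.5 * (xf + np.conj(xb[::-1]))            # fuse the two estimates of x
    return x

# Example with the noisy samples y and K from the first sketch:
# x_hat = fb_hankel_fit(y, K)
```

This sketch only conveys the structure of the FB-Hankel constraint; the accuracy and convergence behavior reported in the paper refer to the algorithm derived from [8], not to this heuristic.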