ONLINE DOMINANT GENERALIZED EIGENVECTORS EXTRACTION VIA A RANDOMIZED METHOD
Haoyuan Cai *, Maboud F. Kaloorazi *, Jie Chen *, Wei Chen † and Cédric Richard ‡
* Center of Intelligent Acoustics and Immersive Communications (CIAIC), School of Marine Science and Technology, Northwestern Polytechnical University, China
† State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, China
‡ Université Côte d'Azur, France
Emails: [email protected], [email protected], [email protected], [email protected], [email protected]
ABSTRACT

The generalized Hermitian eigendecomposition problem is ubiquitous in signal processing and machine learning applications. Considering the need to process streaming data in practice and the restrictions of existing methods, this paper is concerned with fast and efficient tracking of generalized eigenvectors. We first present a computationally efficient randomized algorithm, termed alternate-projections randomized eigenvalue decomposition (APR-EVD), for solving a standard eigenvalue problem. By exploiting a rank-1 update strategy, two online algorithms based on APR-EVD are developed for dominant generalized eigenvector extraction. Numerical examples show the practical applicability and efficacy of the proposed online algorithms.

Index Terms— Randomized algorithms, dominant generalized eigenvectors, online algorithms, fast subspace tracking.

1. INTRODUCTION

The generalized Hermitian eigenvalue problem (GHEP) [1] is of great interest in signal processing, machine learning and data analysis applications. GHEP algorithms provide powerful tools to treat problems in blind source separation [2, 3], feature extraction [4, 5], noise filtering [6], fault detection [7], antenna array processing [8], classification [9], and speech enhancement [10]. Traditional methods for solving the GHEP include power and inverse iteration based methods, the Lanczos method and the Jacobi-Davidson method [1, 11]. These batch methods, however, are inefficient and, in some cases, infeasible to apply due to their computational workload. The online methods presented in [8, 9, 12] are gradient-based and extract the first dominant (or principal) generalized eigenvector. However, they are unsuitable for applications where multiple dominant generalized eigenvectors are desired [10]. In addition, these methods suffer from the so-called speed-stability problem [13], i.e., it is hard to select a learning rate that guarantees both tracking speed and numerical stability. To address this issue, coupled learning methods were proposed in [14, 15]. These sequential methods, besides being difficult to parallelize on modern computational platforms, may even cause error propagation during the orthogonal projection procedure. Yang et al. [16] proposed recursive least-squares (RLS)-based online algorithms built on the projection approximation subspace tracking (PAST) technique [17]. The work in [18] presented a computationally efficient algorithm, but it suffers from slow convergence because of its narrow search space [19]. Tanaka [19] developed an online algorithm based on the power iteration scheme. To track the r dominant generalized eigenvectors, however, this method needs O(rN^2) operations in each iteration (with N being the dimension of an input matrix), which is still computationally expensive.

The generalized eigenvalues and eigenvectors are extracted from a matrix pencil (A, B). In online applications [2, 4, 6-10], however, this pair is unknown, and the rank-1 update strategy [14-16, 18, 19] uses the observed streaming stochastic signals to estimate it. Moreover, in many cases the signal subspace spanned by the dominant generalized eigenvectors lies in a low-dimensional space [10]. This implies that low-rank approximation techniques can be applied to treat GHEPs. Recent low-rank approximation methods based on randomized sampling [20-22] are computationally efficient and, in addition, can harness advanced computer architectures.

Our Contributions. By combining a randomized low-rank matrix factorization method with the rank-1 update strategy, we propose two online algorithms for r-dominant generalized eigenvector extraction, where r >= 1. We first present the APR-EVD (alternate-projections randomized eigenvalue decomposition) algorithm, which efficiently solves a standard eigenvalue problem. Then, by harnessing the rank-1 update scheme, we devise two online algorithms to extract the generalized eigenvectors from streaming data. The proposed algorithms are computationally efficient, as the necessary steps in each iteration need O(N^2) operations. Further, they can be parallelized on modern computers.

Notation. Normal fonts x and X denote scalars. Boldface small letters x and capital letters X denote column vectors and matrices, respectively. C denotes the complex domain. The superscript (·)* denotes the conjugate of a complex number, (·)^H denotes the Hermitian transpose operator, and (·)† denotes the pseudo-inverse of a matrix. I_N denotes an identity matrix of order N. orth_r(·) constructs an orthonormal basis with r columns for the range of a matrix.

2. PROBLEM FORMULATION

Given a matrix pencil (R_y, R_x), where R_y, R_x ∈ C^(N×N) are Hermitian and positive definite, the GHEP [1, 11] is defined as:

    R_y w_i = λ_i R_x w_i,    i = 1, ..., N    (1)

where w_1, ..., w_N are nonzero vectors corresponding to the N generalized eigenvalues λ_1 > λ_2 > ... > λ_N > 0. Provided that R_x is invertible, to obtain a generalized eigenpair (w_i, λ_i), (1) is in general reduced to a Hermitian or non-Hermitian eigenvalue problem:

This work was supported in part by NSFC grants 61671382 and 61811530283, and 111 project (B18041). Corresponding author: J. Chen.
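As a concrete check of (1), the sketch below (assuming NumPy is available; all variable names are illustrative) builds a random Hermitian positive-definite pencil (Ry, Rx), reduces the GHEP to a standard Hermitian eigenvalue problem by whitening with R_x^(-1/2), and verifies the recovered generalized eigenpairs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Random Hermitian positive-definite pencil (Ry, Rx)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Rx = A @ A.conj().T + N * np.eye(N)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Ry = B @ B.conj().T + N * np.eye(N)

# Whitening: K = Rx^{-1/2} (Hermitian), so R = K Ry K^H is Hermitian
lx, Ux = np.linalg.eigh(Rx)
K = Ux @ np.diag(lx ** -0.5) @ Ux.conj().T
R = K @ Ry @ K.conj().T

# Standard Hermitian EVD of the whitened matrix, then map back
lams, V = np.linalg.eigh(R)      # generalized eigenvalues (ascending)
W = K.conj().T @ V               # generalized eigenvectors w_i

# Each pair satisfies Ry w_i = lambda_i Rx w_i, and W^H Rx W = I
for i in range(N):
    assert np.allclose(Ry @ W[:, i], lams[i] * (Rx @ W[:, i]))
assert np.allclose(W.conj().T @ Rx @ W, np.eye(N))
```

Note that this batch reduction costs O(N^3) per pencil, which is exactly the expense the online algorithms of Section 3 are designed to avoid.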
978-9-0827-9705-3 2353 EUSIPCO 2020

Case HEP: Provided that R_x^(1/2) (R_x^(1/2))^H = R_x, the set of eigenvectors w is obtained as (R_x^(-1/2))^H v, where v is the set of eigenvectors of R_x^(-1/2) R_y (R_x^(-1/2))^H, determined so that v^H v = I (equivalently, w^H R_x w = I).

Case Non-HEP: The generalized eigenvectors w_i and generalized eigenvalues λ_i are obtained by solving R_x^(-1) R_y w_i = λ_i w_i.

In many signal and information processing applications, R_x and R_y are associated with data covariance matrices. Let the covariance matrices of x(k) and y(k) be given by R_x = E{x(k) x^H(k)} and R_y = E{y(k) y^H(k)}. When processing streaming data, at each instant k these matrices are typically estimated by time averaging over the most recent data [14-16, 18, 19, 23] as follows:

    R_x(k) = α R_x(k-1) + x(k) x^H(k),    (2)
    R_y(k) = β R_y(k-1) + y(k) y^H(k),    (3)

where the parameters α ∈ (0, 1) and β ∈ (0, 1) are smoothing constants.

Under the above setting, in this paper we devise two online algorithms by first transforming (1) into a standard eigenvalue problem and then applying a randomized EVD algorithm, which enables processing streaming data for tracking generalized eigenvectors.

3. PROPOSED ALGORITHMS

In this section, we first propose a randomized eigenvalue decomposition algorithm. We then adapt this algorithm to the streaming data setting with Case HEP and Case Non-HEP, respectively.

3.1. The APR-EVD Algorithm

Given a rank-r matrix A ∈ C^(N×N), the proposed APR-EVD algorithm is computed as follows: we generate a random Gaussian matrix Ω ∈ C^(N×d), where r ≤ d

Computational Cost of APR-EVD. To factor A, APR-EVD incurs the following costs: generating a random matrix costs O(Nd); forming G and H each costs O(N^2 d); considering an estimation to (7), forming T and computing an eigenpair costs O(Nr^2) + O(r^3). Thus, the operation count (dominated by multiplications with A) satisfies

    C_APR-EVD = O(N^2 d).    (9)

Here d (the sampling size parameter) is very close to r.

3.2. Online Algorithm for Extracting r-dominant Generalized Eigenvectors with Case HEP (Algorithm 1)

Directly recalculating R_x^(-1/2)(k) R_y(k) (R_x^(-1/2)(k))^H has a computational complexity of O(N^3). We therefore consider the estimated covariance matrices given by the rank-1 updates at each instant, i.e., equations (2) and (3), together with the proposed APR-EVD algorithm, to recursively compute the generalized eigenvectors. For ease of notation, let K(k) = R_x^(-1/2)(k) and R(k) = K(k) R_y(k) K^H(k).

In order to use APR-EVD upon the arrival of x(k) and y(k), we recursively update R(k), then G(k) = R^H(k) Ω(k) and H(k) = R(k) G(k), so that the orthonormal basis at instant k, Q(k), can be extracted. Exploiting the results in [19], we update R(k) and K(k):

    R(k) = (1/α) [ β R(k-1) + ỹ(k) ỹ^H(k) + x̃(k) c^H(k) + θ(k) h(k) x̃^H(k) ],
    K(k) = (1/√α) [ K(k-1) + γ_1(k) x̃(k) x̄^H(k) ],    (10)

where

    ỹ(k) = K(k-1) y(k),    (11)
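The APR-EVD steps named in the cost analysis (random test matrix Ω, projections G = A^H Ω and H = A G, orthonormal basis Q, small matrix T) can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the paper's exact procedure: the function name `apr_evd`, the oversampling choice d = r + 2, and the compression T = Q^H A Q stand in for the paper's estimation formula (7), which is not shown in this excerpt.

```python
import numpy as np

def apr_evd(A, r, d=None, rng=None):
    """Sketch of an alternate-projections randomized EVD.

    Approximates the r dominant eigenpairs of a (nearly) rank-r
    matrix A from a Gaussian sketch. The small-matrix step uses
    T = Q^H A Q; the paper's estimation formula (7) may differ.
    """
    rng = rng or np.random.default_rng()
    N = A.shape[0]
    d = d or r + 2                        # small oversampling, r <= d
    Omega = rng.standard_normal((N, d))   # random test matrix
    G = A.conj().T @ Omega                # first projection
    H = A @ G                             # alternate projection
    Q, _ = np.linalg.qr(H)                # orthonormal basis for range(H)
    T = Q.conj().T @ A @ Q                # d x d compression of A
    evals, S = np.linalg.eig(T)
    order = np.argsort(-np.abs(evals))[:r]
    return evals[order], Q @ S[:, order]  # approximate eigenpairs of A

# Usage: a rank-3 symmetric matrix with known spectrum {5, 3, 2}
rng = np.random.default_rng(1)
N, r = 60, 3
U, _ = np.linalg.qr(rng.standard_normal((N, r)))
A = U @ np.diag([5.0, 3.0, 2.0]) @ U.T
vals, vecs = apr_evd(A, r, rng=rng)
```

The two projections are what keeps the per-call cost at O(N^2 d): the only O(N^2) work is the pair of multiplications with A, both of which are plain matrix products that parallelize well.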