Sparsity-fused Kalman Filtering for Reconstruction of Dynamic Sparse Signals

Xin Ding∗, Wei Chen†∗, Ian Wassell∗
∗Computer Laboratory, University of Cambridge, Cambridge, UK
†State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, China
Email: {xd225, wc253, ijw24}@cam.ac.uk

Abstract—This article focuses on the problem of reconstructing dynamic sparse signals from a series of noisy compressive sensing measurements using a Kalman Filter (KF). This problem arises in many applications, e.g., Magnetic Resonance Imaging (MRI), Wireless Sensor Networks (WSNs) and video reconstruction. The conventional KF does not consider the sparsity structure present in most practical signals, and it is therefore inaccurate when applied to sparse signal recovery. To deal with this issue, we derive a novel KF procedure that takes the sparsity model into consideration. Furthermore, an algorithm, namely the Sparsity-fused KF, is proposed based upon it. The method of iterative soft thresholding is utilized to refine our sparsity model. The superiority of our method is demonstrated on synthetic data and on practical data gathered by a WSN.

I. INTRODUCTION

Most practical signals have a sparse structure in a suitable basis, and the sparse signal model has enabled dramatic improvements in solving linear inverse problems. Based on this, Compressive Sensing (CS) [1], [2], [3] has emerged as a new sampling paradigm, which allows sparse signals to be recovered accurately from far fewer measurements than required by the Nyquist sampling rate. CS for static sparse signal recovery has been heavily investigated, e.g., [4], [5], [6], [7]. Compared to the static case, dynamic sparse signals have more correlated statistics that can be exploited. Improved CS schemes using dynamic correlation features have been successfully developed in many applications, e.g., MRI [8], Wireless Sensor Networks (WSNs) [9] and video [10].

The Kalman Filter (KF) has been one of the most popular approaches in the area of linear dynamic system modeling. However, the KF is not an ideal method when it comes to dynamic sparse signals, as the KF in general does not consider any sparsity constraints during the state estimation. In [11], the first KF-based CS method was proposed. The authors employed CS on the KF residual, identified the support set of the signals and then carried out a second KF corresponding to the obtained set. The signal sparsity is thus enforced by the support detection, rather than by changing the underlying principle of the KF. In [12], the KF procedure was transformed into a MAP estimator and the sparse constraint was included as in CS-type algorithms. By exploiting the inherent statistics of the KF, the authors of [13] proposed to employ a hierarchical Bayesian network to capture the sparsity; however, they only focused on the sparsity of the innovations between correlated signals. In [14], the KF was formulated within an optimization framework to which it is straightforward to add sparsity constraints, but this approach lacks the covariance updates of the conventional KF. Using a constrained optimization formulation, Carmi et al. proposed a stand-alone KF-based reconstruction approach that employs pseudo-measurements [15]. To the best of our knowledge, all the existing methods either enforce sparsity by estimating the signal amplitudes and support separately, or pursue sparsity by introducing an optimization problem into the original KF in various ways. The non-sparse estimation framework underlying the KF has not been altered.

In this article, we propose a novel KF process which directly fuses the signal sparsity model into its filtering process, and we then develop an algorithm for the reconstruction of dynamic sparse signals based on the sparsity-fused KF. We pursue the sparsity of signals via Iterative Soft Thresholding (IST), which has been proved to be effective in estimating sparse signals [16]. Our method outperforms the existing algorithms in the following respects: 1) our reconstruction results for CS-acquired dynamic sparse signals are more accurate in the practical noisy case; 2) it is more robust when the measurement noise increases; 3) it converges in fewer iterations; 4) it is easier to tune the parameters of our algorithm.

The rest of the paper is organized as follows: Section II provides the mathematical formulation of the problem; Section III presents the details of our work, including the sparsity statistic model, the novel sparsity-fused KF process and our proposed algorithm; Section IV demonstrates the experimental results for both synthetic and practical data; Section V concludes this article.

II. PROBLEM FORMULATION

Consider a sequence of signals x_t ∈ R^N (t = 1, ..., T). Each of the signals is sparse in some sparsifying basis Ψ ∈ R^{N×N}, i.e., s_t = Ψ* x_t, where s_t has only K (K ≪ N) non-zero coefficients and Ψ* denotes the conjugate transpose of Ψ. We sample this sequence of sparse signals via:

y_t = A_t s_t + e_t,   (1)

where A_t = Φ_t Ψ is the equivalent projection matrix, Φ_t ∈ R^{M×N} is the sensing matrix, y_t ∈ R^M (M ≪ N) is the t-th observation vector and e_t ∈ R^M is the measurement noise. This is an under-determined system, and conventionally it is impossible to recover s_t from y_t. However, under the condition that s_t is sparse, if A_t satisfies the Restricted Isometry Property (RIP) [17], CS offers a way to solve the problem with an overwhelming probability of success. Conventional CS reconstructs s_t by solving the following optimization problem:

min_{s_t} ||s_t||_1   s.t.   ||y_t − A_t s_t||_2^2 ≤ ε,   (2)

where ||v||_p = (Σ_{j=1}^N |v_j|^p)^{1/p} is the l_p-norm of v and ε is a tolerance parameter. Reconstructing dynamic sparse signals one by one using conventional CS as in (2) therefore does not exploit any correlations between the signals.

We model our system as a linear dynamic system which follows:

s_t = F_t s_{t−1} + q_t,   (3)

where F_t denotes the state transition matrix from the (t−1)-th signal to the t-th signal and q_t represents the process noise. For the system defined by (1) and (3), if we assume the statistics of s_{t−1} are known as p(s_{t−1}) ∼ N(ŝ_{t−1}, P_{t−1}), and we further assume the following Gaussian distributions: p(e_t) ∼ N(0, R_t), p(q_t) ∼ N(0, Q_t), p(s_t|ŝ_{t−1}) ∼ N(F_t ŝ_{t−1}, P_{t|t−1}) and p(y_t|s_t) = N(A_t s_t, R_t), it becomes a standard KF setting. Under these assumptions, the KF provides a solution that both minimizes the MSE, E[||s_t − ŝ_t||_2^2], and maximizes the posterior probability, i.e.,

ŝ_t = arg max_{s_t} p(s_t|y_t, ŝ_{t−1}).   (4)

See [18] for the original KF equations. The process of the conventional KF includes two stages: predicting the current signal using the previous one, and updating the estimate using the measurements. It utilizes the correlations between dynamic signals, but the solution ŝ_t is generally not sparse. This is highly inaccurate when applied to the recovery of sparse signals. We therefore propose to design a new KF process that does not ignore the sparse structure of the signals.

III. SPARSITY-FUSED KALMAN FILTERING

From a statistical view, the original KF essentially approximates the probability in (4) as:

p(s_t|y_t, ŝ_{t−1}) ∝ p(s_t|ŝ_{t−1}) p(s_t|y_t)
                   = p(s_t|ŝ_{t−1}) [p(y_t|s_t) p(s_t)/p(y_t)]
                   ∝ p(s_t|ŝ_{t−1}) p(y_t|s_t),   (5)

where p(s_t|ŝ_{t−1}) denotes the prediction step and p(y_t|s_t) represents the update step. Accordingly, the operation of the KF can be explained by Fig. 1(a). The green curve, which represents the distribution of the KF estimation, i.e., p(s_t|y_t, ŝ_{t−1}), is derived by multiplying the red and blue curves, i.e., p(s_t|ŝ_{t−1}) and p(y_t|s_t), respectively. We refer to this process as "fusion". Obviously, in this fusion process, p(s_t) is approximated as a uniform distribution, i.e., the conventional fusion process does not include any probability related to the sparsity of the signal. In our method, we take the signal sparsity constraint into consideration, which gives:

p(s_t|y_t, ŝ_{t−1}) ∝ p(s_t|ŝ_{t−1}) p(y_t|s_t) p(s_t),   (6)

where p(s_t) is the distribution modeling the sparsity of the signal. As illustrated in Fig. 1(b), we derive our sparsity-fused KF estimation (the green curve) by fusing the distributions of the prediction (the red curve), the measurements (the blue curve) and the sparse estimation (the yellow curve).

Fig. 1. Fusion process for (a) the conventional KF; red: prediction distribution p(s_t|ŝ_{t−1}); blue: measurement distribution p(y_t|s_t); green: KF estimation p(s_t|y_t, ŝ_{t−1}); (b) the sparsity-fused KF; red and blue curves represent the same distributions as in (a); yellow: distribution of the sparse estimation p(s_t); green: sparsity-fused KF estimation p(s_t|y_t, ŝ_{t−1}).

A. Sparsity Distribution Model

We model the sparse signal distribution by:

p(s_t) ∼ N(0, T_t),   T_t = diag(λ) = diag([λ_1, ..., λ_N]),   (7)

where the vector λ denotes the variances of the signal. As noticed, the sparsity is transferred from the amplitudes of the signal to its variances. If we denote s_t = [s_1, s_2, ..., s_N]^T, the variance that maximizes the likelihood of the signal amplitude at each index is

λ_i = arg max_{λ_i} (1/√(2πλ_i)) e^{−s_i^2/(2λ_i)},   (8)

where i = 1, ..., N. Thus, we can derive that the relationship between the signal amplitudes and the variances in our model is:

s_i^2 = λ_i.   (9)

This relationship paves the way to applying, within our system, any conventional sparsity-pursuing method that operates on the signal amplitudes rather than on the variances.

B. Sparsity-fused KF update steps

Because of the difference between (5) and (6), the original KF update steps are no longer applicable to our model. We derive the following novel KF update equations based on (6):

M_t = P_t (P_t + T_t)^{−1},   (10)
s̃_t = (I − M_t) ŝ_t,   (11)
P̃_t = (I − M_t) P_t,   (12)

where P_t and ŝ_t are, respectively, the covariance matrix and the state estimate in the original KF, T_t is the variance matrix of the sparsity model as defined in (7), and I is an identity matrix. The resulting M_t, s̃_t and P̃_t are the sparsity-fused KF gain, the sparse state estimate and the corresponding covariance matrix, respectively.
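The update (10)-(12) can be sanity-checked numerically: in the scalar case it must reproduce the standard product of the Gaussians N(µ_1, σ_1^2) and N(0, σ_2^2). A minimal numpy sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def sparsity_fused_update(s_hat, P, T):
    """Sparsity-fused KF update, Eqs. (10)-(12): fuse the KF
    posterior N(s_hat, P) with the sparsity prior N(0, T)."""
    I = np.eye(len(s_hat))
    M = P @ np.linalg.inv(P + T)   # Eq. (10): sparsity-fused KF gain
    s_tilde = (I - M) @ s_hat      # Eq. (11): sparse state estimate
    P_tilde = (I - M) @ P          # Eq. (12): fused covariance
    return s_tilde, P_tilde

# Scalar check against the product-of-Gaussians result:
# fusing N(mu1, var1) with N(0, var2) gives mean mu1 - var1*mu1/(var1+var2)
# and variance var1 - var1^2/(var1+var2).
mu1, var1, var2 = 2.0, 4.0, 1.0
s_tilde, P_tilde = sparsity_fused_update(
    np.array([mu1]), np.array([[var1]]), np.array([[var2]]))
assert np.isclose(s_tilde[0], mu1 - var1 * mu1 / (var1 + var2))   # = 0.4
assert np.isclose(P_tilde[0, 0], var1 - var1**2 / (var1 + var2))  # = 0.8
```

Note how a small prior variance T_i (a coefficient believed to be zero) drives the corresponding entry of the estimate toward zero, while a large T_i leaves the KF estimate essentially unchanged.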
Derivation: Define the green Gaussian function in Fig. 1(a) as y_1(s; µ_1, σ_1) = (1/√(2πσ_1^2)) exp[−(s − µ_1)^2/(2σ_1^2)] and the yellow Gaussian function in Fig. 1(b) as y_2(s; 0, σ_2) = (1/√(2πσ_2^2)) exp[−s^2/(2σ_2^2)]. Note that the process of deriving the green Gaussian distribution in Fig. 1(b) is equivalent to fusing y_1 and y_2. Following the scalar derivation process in [18], we can obtain the expression for the fused function as:

y_fused(s; µ_fused, σ_fused) = (1/√(2πσ_fused^2)) e^{−(s − µ_fused)^2/(2σ_fused^2)},   (13)

where µ_fused = µ_1 − σ_1^2 µ_1/(σ_1^2 + σ_2^2) and σ_fused^2 = σ_1^2 − σ_1^4/(σ_1^2 + σ_2^2). Substituting the scalars with the KF terms as follows: µ_fused → s̃_t, µ_1 → ŝ_t, σ_fused^2 → P̃_t, σ_1^2 → P_t, σ_2^2 → T_t, M = σ_1^2/(σ_1^2 + σ_2^2) → M_t, the sparsity-fused KF update equations (10)-(12) are derived. □

C. Proposed Algorithm for the Recovery of Dynamic Sparse Signals via IST

Based on the sparse signal model and the newly-derived KF process, we propose our Sparsity-fused KF algorithm, as presented in Algorithm 1.

Algorithm 1 Sparsity-fused KF
Input: y_t (t = 1, ..., T), A, F_t, Q, R = σ_0^2 I, ŝ_0, P_0, γ, N, N_τ.
Output: ŝ_t (t = 1, ..., T).
1: for t = 1 to T do
2:   Run the original KF:
       ŝ_{t|t−1} = F_t ŝ_{t−1},   P_{t|t−1} = F_t P_{t−1} F_t^* + Q,   (14)
       K_t = P_{t|t−1} A^* (A P_{t|t−1} A^* + R)^{−1},   (15)
       ŝ_t = ŝ_{t|t−1} + K_t (y_t − A ŝ_{t|t−1}),   (16)
       P_t = P_{t|t−1} − K_t A P_{t|t−1}.   (17)
3:   Initialization: s_t^0 = ŝ_t, P^0 = P_t.
4:   for τ = 1, 2, ..., N_τ do
5:     Thresholding:
         σ^τ = √((P^{τ−1}(1,1) + ... + P^{τ−1}(N,N))/N),   (18)
         r_t^{τ−1} = y_t − A s_t^{τ−1},   (19)
         š_t^τ = η_τ(A^* r_t^{τ−1} + s_t^{τ−1}; γσ^τ).   (20)
6:     Derive T: T_t^τ = diag([š_t^τ(1)^2, š_t^τ(2)^2, ..., š_t^τ(N)^2]).
7:     Run the sparsity-fused KF:
         M_t^τ = P_t (P_t + T_t^τ)^{−1},   (21)
         s_t^τ = (I − M_t^τ) ŝ_t,   (22)
         P^τ = (I − M_t^τ) P_t.   (23)
8:   end for
9:   Update: ŝ_t = s_t^{N_τ}, P_t = P^{N_τ}.
10: end for

The algorithm proceeds as follows. For each of the signals in the sequence, an original KF is performed first, based on which the thresholding and the sparsity-fused KF steps are initialized. Then we employ soft thresholding (with a threshold function η) to pursue the sparsity of the signal, in which the threshold σ is determined by the KF covariance matrix P. The threshold function is defined as:

η_τ(u; γσ^τ) = u − γσ^τ if u ≥ γσ^τ;  u + γσ^τ if u ≤ −γσ^τ;  0 elsewhere,   (24)

where τ is the index of the τ-th iteration, γ is a control parameter, u is a scalar element and the function η is applied component-wise when applied to a vector. Then, according to the result of the thresholding and the relationship obtained in (9), the variance matrix T of the sparsity distribution can be derived. It is then utilized by the sparsity-fused KF procedure to obtain the reconstruction result for this iteration. The thresholding and sparsity-fused KF steps are repeated to refine the reconstruction iteratively. The same process is then carried out serially for the next signal in the sequence.

In the area of static sparse signal recovery, iterative thresholding schemes have been popular for some years [19]. Such schemes are attractive because they have very low computational complexity and low storage requirements. However, there are two issues concerning the implementation of such methods. Firstly, it is essential to choose the thresholds very carefully, and they need to vary from iteration to iteration. In [20], extensive experiments were carried out that led to a proposal for how to optimally tune the parameters. Secondly, these methods behave poorly in the presence of noise.

Fortunately, these issues are not problematic in the context of our algorithm. Specifically, for the soft threshold function η that we employ, as in [6], the threshold σ^τ is defined by (σ^τ)^2 = E[||s^τ − s||_2^2], which is an estimate of the current MSE. In conventional IST, an empirical estimate of (σ^τ)^2 is used, as the true MSE is unknown. But in our case it is much simpler, as the KF process happens to provide us with such an estimate in the covariance matrix P. Therefore, we define σ^τ as in (18). Regarding the second issue, we benefit from the correlations between the signals. Our reconstruction depends on both the previous signal and the current measurements, rather than relying purely on the noisy measurements as in conventional IST. We will show by simulation in Section IV that, compared to the existing algorithms, our algorithm is more robust when noise is present.

D. Heuristics for the Proposed Algorithm

We shall start with the heuristics of the conventional IST algorithm in the static case [6]. Define the operator H = A^* A − I. In the first iteration of the IST, an estimate of the true signal s is derived as:

s^1 = η(A^* y) = η(s + Hs),   (25)

where Hs can be regarded as a noisy random vector with i.i.d. Gaussian entries of variance N^{−1}||s||_2^2. It is well understood that the soft thresholding results in s^1 being closer to s than A^* y [6]. Then, in the next iteration, the estimate is:

s^2 = η[A^*(y − A s^1) + s^1] = η[s + H(s − s^1)],   (26)

where the noise vector is again a Gaussian vector, now with variance N^{−1}||s − s^1||_2^2. We can see that the noise level is lower than in the previous iteration, so the thresholding result can be anticipated to be more accurate at this iteration, and so on.

Now consider the proposed algorithm. In the first iteration τ = 1 at time t, we get an estimate as:

š_t^1 = η[A^*(y_t − A s_t^0) + s_t^0] = η[s_t + H(s_t − s_t^0)],   (27)

where the noise vector has a variance of N^{−1}||s_t − s_t^0||_2^2 and s_t^0 is the initialization by the conventional KF.
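The soft-threshold operator (24) and one IST iteration of the form (25) can be sketched as follows; this is a minimal illustration with hand-picked dimensions and threshold, not the tuned algorithm from the paper:

```python
import numpy as np

def eta(u, thresh):
    """Component-wise soft threshold, Eq. (24)."""
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

rng = np.random.default_rng(0)
N, M, K = 256, 72, 8
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.normal(0.0, 3.0, K)  # K-sparse signal
A = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))  # i.i.d. Gaussian sensing matrix
y = A @ s

# One IST iteration from an initial estimate s0; with s0 = 0 this is Eq. (25).
# The threshold 0.5 is hand-picked for this demo, not the gamma*sigma^tau rule.
s0 = np.zeros(N)
s1 = eta(A.T @ (y - A @ s0) + s0, thresh=0.5)

# Thresholding moves the estimate closer to s than the raw proxy A^T y.
assert np.linalg.norm(s1 - s) < np.linalg.norm(A.T @ y - s)
```

The residual error ||s − s^1||_2 then sets the noise level of the next iteration, which is the mechanism the heuristic argument above relies on.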

Therefore, if the original KF step produces an acceptable result such that ||s_t − s_t^0||_2^2 < ||s_t||_2^2, we can anticipate that our estimate š_t^1 is closer to s_t than the one in (25). Then, in this case, by fusing it with the distribution obtained in the original KF step, which contains the information of both the previous signal and the measurements, the fused estimate s_t^1 is even closer to s_t than š_t^1. In the next iteration, the estimate is:

š_t^2 = η[A^*(y_t − A s_t^1) + s_t^1] = η[s_t + H(s_t − s_t^1)],   (28)

where the noise vector has a variance of N^{−1}||s_t − s_t^1||_2^2. As shown previously, when our estimate in the first iteration, i.e., s_t^1, is closer to s_t than in the conventional IST, i.e., s^1 in (25), the noise level in (28) is lower than that in (26). In consequence, we can anticipate that our estimate in each iteration is more accurate than that of the original IST, and so our algorithm will converge in fewer iterations than the conventional IST. This will be shown by simulation in Section IV. Besides, a better recovery of s_t helps to create a more accurate s_{t+1}^0, which is crucial to the reconstruction of the (t+1)-th signal.

IV. EXPERIMENTAL RESULTS

The performance of the Sparsity-fused KF algorithm is tested using several examples in which dynamic sparse signals are recovered from noisy observations. We first test it using synthetic signals and then employ it to reconstruct practical signals obtained in a WSN.

A. Experiments on Synthetic Signals

We generate a sequence of sparse signals (T = 130) which behave as a random walk process as follows:

s_{t+1}(i) = s_t(i) + q_t(i) if i ∈ supp(s_t); 0 otherwise,   (29)

where i is the index, s_t ∈ R^256 is assumed to be a sparse vector and its support consists of 8 elements whose indices are uniformly sampled in the interval [1, 256]. The process noise obeys q_t ∼ N(0, Q), and the covariance matrix Q is taken as a diagonal matrix with entries equal to 1 on its diagonal. The transition matrix F_t is taken as an identity matrix for t = 1, ..., T. For all the simulations, we employ an i.i.d. Gaussian matrix as the sensing matrix A and R = σ_0^2 I, with various values of σ_0, as the measurement noise covariance matrix.

In the first experiment, we take 72 samples at each t and try to recover the signals when σ_0 = 0.2. For initialization, we set ŝ_0 = 0 and P_0 as a diagonal matrix whose entries are all 9 [11]. Let γ = 2 [21] and the number of iterations N_τ = 100. Fig. 2 shows an example of the signal reconstructed by our algorithm and by the conventional KF method. We can see that our method manages to recover the signal with a sparse structure, while the conventional KF fails to do so, as it does not use any information about the sparsity.

Fig. 2. An example of a reconstructed signal using the Sparsity-fused KF algorithm and the conventional KF. The red circles denote the amplitudes of the original signal for all its non-zero coefficients. (t = 100, M = 72, N = 256, σ_0 = 0.2.)

Under the same parameter settings, we then compare our algorithm with the CS-embedded KF method for dynamic signals in [15] (denoted as CSKF), as well as with the conventional static methods, i.e., CS and IST. We set R = 200^2 and N_τ = 100 for the CSKF algorithm, and the IST algorithm is optimally tuned [20]. 100 trials for each algorithm are carried out. Fig. 3 illustrates the distributions of the MSE for all the T × 100 = 13000 reconstructions. It is clear that the performance of our algorithm surpasses the others. Most of our reconstructions have an MSE of approximately 0.006, while the mean MSEs for the other algorithms are all above 0.014.

Fig. 3. Histogram of the MSE for all 13000 reconstructions for various algorithms. (T = 130, M = 72, N = 256, σ_0 = 0.2.)

Note that the other dynamic algorithm, CSKF, is only slightly better than the static algorithms. This is because the CSKF algorithm is highly dependent on the number and the quality of the measurements. In other words, when just a small fraction of measurements are taken and/or they are too noisy, the benefit of the dynamic correlations in the CSKF algorithm is submerged by the error induced by the measurements.

Fig. 4 demonstrates the track of the MSE at different numbers of iterations for various algorithms. Our algorithm is compared with the iterative algorithms for static reconstruction (IST, AMP [6]) and for dynamic reconstruction (CSKF). Each point in the figure is the average of 130 signals over 100 trials. For clarity, we only show the first 15 iterations. We can see that the dynamic algorithms converge in fewer iterations than the static ones. Our algorithm requires the fewest iterations for convergence, and its MSE is always the lowest for any number of iterations.

Fig. 4. The track of the MSE at different numbers of iterations for various algorithms. (T = 130, M = 72, N = 256, σ_0 = 0.2.)

We then test the performance of our algorithm with different numbers of measurements and under different amounts of measurement noise. The results are shown in Fig. 5 and Fig. 6. All the results are the average of 130 signals over 100 trials. In the experiment that yielded the results in Fig. 5, to keep the energy of the measurement noise constant, we set σ_0^2 M = 3. It is clear that our algorithm is superior to the others, especially when the number of measurements is low. Note that the IST algorithm fails when fewer than 30 measurements are taken. To obtain Fig. 6, we set M = 72 and the rest of the settings are the same as before. We can observe that our algorithm is also more robust than the others when the noise increases.

Fig. 5. The MSE for reconstructions with different numbers of measurements for various algorithms. (T = 130, σ_0^2 M = 3, N = 256.)

Fig. 6. The MSE for reconstructions under different amounts of noise for various algorithms. (T = 130, M = 72, N = 256.)

Fig. 7. An example of the reconstructed temperature signals using various algorithms: original signal; Sparsity-fused KF (NSE = 9.7e−4); CSKF (NSE = 2.2e−3); CS (NSE = 2.3e−3); IST (NSE = 2.8e−3). (17th SN, M = 150, N = 1024, σ_0 = 0.2.)

Fig. 8. The NSE for reconstructions with different numbers of measurements for various algorithms. (T = 20, σ_0^2 M = 6, N = 1024.)
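The synthetic data model in (29) — a random walk on a fixed, uniformly drawn support with Q = I — can be sketched as below; the function and parameter names are ours, chosen for illustration:

```python
import numpy as np

def random_walk_sparse(T=130, N=256, K=8, q_std=1.0, seed=0):
    """Generate T signals in R^N, K-sparse on a fixed support,
    following the random walk of Eq. (29)."""
    rng = np.random.default_rng(seed)
    support = rng.choice(N, K, replace=False)  # support drawn uniformly from [1, N]
    signals = np.zeros((T, N))
    s = np.zeros(N)
    for t in range(T):
        # s_{t+1}(i) = s_t(i) + q_t(i) on the support, 0 elsewhere
        s[support] += rng.normal(0.0, q_std, K)
        signals[t] = s
    return signals, support

signals, support = random_walk_sparse()
assert signals.shape == (130, 256)
assert all(np.count_nonzero(signals[t]) <= 8 for t in range(130))
```

Keeping the support fixed across t is what makes F_t = I a reasonable transition model in this experiment.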
B. Experiments on Practical Signals

We now employ the proposed algorithm to reconstruct the signals gathered by the WSN located in the Intel Berkeley Research lab [22]. In this WSN, 54 Sensor Nodes (SNs) were deployed at various positions on one floor of the lab building, and each of them gathered humidity, temperature and light level data for more than one month.

We extract the temperature data during a continuous time period collected by 20 SNs as the dynamic sparse signals, i.e., T = 20. As demonstrated in [9], these independently collected signals are highly correlated. Meanwhile, these signals are highly sparse in the Discrete Cosine Transform (DCT) domain. Therefore, we utilize the DCT matrix as the sparsifying matrix Ψ, and the sensing matrix Φ ∈ R^{M×N} is set as an i.i.d. Gaussian random matrix. Due to the high correlation present, we let the transition matrix F be an identity matrix and use the method in [23] to learn the process noise covariance matrix Q. We use the conventional CS reconstruction result to initialize ŝ_0 and set P_0 = 100Q [11].

Let M = 150, N = 1024, σ_0 = 0.2, with the rest of the parameters set as in Section IV-A. Fig. 7 demonstrates the reconstructed temperature signals for the 17th SN using different algorithms. It is observable that our algorithm produces better reconstruction results in terms of the Normalized Squared Error (NSE), which is defined as ||ŝ_t − s_t||_2^2/||s_t||_2^2.

We then vary the number of measurements to see the performance of our algorithm, as illustrated in Fig. 8. For each particular number of measurements, we carried out 100 trials and took their average to calculate the NSE over all 20 SNs. For clarity, the NSE when M < 100 is not shown, but we note that when M = 50, both the CS and CSKF algorithms fail, i.e., their NSE values are in excess of 0.01, while that of the Sparsity-fused KF is 0.0073. We can see that our algorithm preserves its reconstruction performance advantage with fewer measurements when evaluated with practical data. With fewer measurements needed, our method can reduce the energy consumed in both sampling and transmission, which is one of the most significant concerns in a practical WSN.

V. CONCLUSIONS

In this article, we have derived a novel KF procedure which fuses the sparsity distribution model into its underlying operational principle. Based on this, a Sparsity-fused KF algorithm for reconstructing dynamic sparse signals from a series of noisy CS measurements has been proposed. In the algorithm, we employ the IST method to refine the sparsity model iteratively, which is then utilized in the sparsity-fused KF step. Our algorithm addresses the problem of the conventional KF, namely that it does not consider the sparsity of signals and therefore cannot be applied to sparse signal recovery. Experimental results using synthetic signals have shown that our algorithm outperforms the static reconstruction algorithms (CS, IST) and the dynamic algorithm (CSKF) when noise exists. It is also shown that, compared to these algorithms, our algorithm converges in fewer iterations, has better performance when the number of measurements is lower, and is more robust when the noise level increases. Using practical temperature data gathered by the WSN in the Intel Berkeley Research lab, we have demonstrated that our algorithm produces visibly better reconstructions than the aforementioned algorithms and can help to reduce the energy consumption in a practical WSN.

REFERENCES

[1] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[2] E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, Mar. 2008.
[3] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[4] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[5] D. Needell and J. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
[6] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914–18919, 2009.
[7] M. Duarte and Y. Eldar, "Structured compressed sensing: From theory to applications," IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4053–4085, Sep. 2011.
[8] H. Jung, K. Sung, K. S. Nayak, E. Y. Kim, and J. C. Ye, "k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI," Magnetic Resonance in Medicine, vol. 35, pp. 2313–2351, 2007.
[9] W. Chen, M. Rodrigues, and I. Wassell, "A Frechet mean approach for compressive sensing data acquisition and reconstruction in wireless sensor networks," IEEE Transactions on Wireless Communications, vol. 11, no. 10, pp. 3598–3606, Oct. 2012.
[10] T. Do, Y. Chen, D. Nguyen, N. Nguyen, L. Gan, and T. Tran, "Distributed compressed video sensing," in Proc. 16th IEEE International Conference on Image Processing (ICIP), 2009, pp. 1393–1396.
[11] N. Vaswani, "Kalman filtered compressed sensing," in Proc. 15th IEEE International Conference on Image Processing (ICIP), 2008, pp. 893–896.
[12] W. Dai, D. Sejdinovic, and O. Milenkovic, "Gaussian dynamic compressive sensing," presented at the International Conference on Sampling Theory and Applications (SampTA), May 2011.
[13] E. Karseras, K. Leung, and W. Dai, "Tracking dynamic sparse signals using hierarchical Bayesian Kalman filters," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2013, pp. 6546–6550.
[14] A. Charles, M. Asif, J. Romberg, and C. Rozell, "Sparsity penalties in dynamical system estimation," in Proc. 45th Annual Conference on Information Sciences and Systems (CISS), Mar. 2011, pp. 1–6.
[15] A. Carmi, P. Gurfil, and D. Kanevsky, "Methods for sparse signal recovery using Kalman filtering with embedded pseudo-measurement norms and quasi-norms," IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 2405–2409, Apr. 2010.
[16] D. Donoho and I. Johnstone, "Minimax risk over l_p-balls for l_q-error," Probability Theory and Related Fields, vol. 99, no. 2, pp. 277–303, 1994.
[17] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589–592, 2008.
[18] R. Faragher, "Understanding the basis of the Kalman filter via a simple and intuitive derivation [lecture notes]," IEEE Signal Processing Magazine, vol. 29, no. 5, pp. 128–132, 2012.
[19] J. Tropp and S. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, Jun. 2010.
[20] A. Maleki and D. Donoho, "Optimally tuned iterative reconstruction algorithms for compressed sensing," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 330–341, Apr. 2010.
[21] D. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1094–1121, Feb. 2012.
[22] P. Bodik, W. Hong, C. Guestrin, S. Madden, M. Paskin, and R. Thibaux, "Intel lab data," online: http://db.csail.mit.edu/labdata/labdata.html, 2004.
[23] C. Qiu, W. Lu, and N. Vaswani, "Real-time dynamic MR image reconstruction using Kalman filtered compressed sensing," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009, pp. 393–396.