Sparse Fast Walsh–Hadamard Transform
Robustifying the Sparse Walsh-Hadamard Transform without Increasing the Sample Complexity of O(K log N)

Xiao Li, Joseph Kurata Bradley, Sameer Pawar and Kannan Ramchandran
Dept. of Electrical Engineering and Computer Sciences, U.C. Berkeley.

Abstract—The problem of computing a K-sparse N-point Walsh-Hadamard Transform (WHT) from noisy time-domain samples is considered, where K = O(N^α) scales sub-linearly in N for some α ∈ (0, 1). A robust algorithm is proposed to recover the sparse WHT coefficients in a manner which is stable under additive Gaussian noise. In particular, it is shown that the K-sparse WHT of the signal can be reconstructed from noisy time-domain samples with any error probability ε_N which vanishes to zero, using the same sample complexity O(K log N) as in the noiseless case.

I. INTRODUCTION

The Walsh-Hadamard Transform (WHT) has been widely deployed in image compression [1], spreading code design in multiuser systems such as CDMA and GPS [2], and compressive sensing [3]. The WHT may be computed using N samples and N log N operations via a recursive algorithm [4], [5] analogous to the Fast Fourier Transform (FFT). However, these costs can be significantly reduced if the signal is sparse in the WHT domain, as is true in many real-world scenarios [6], [7].

Since the WHT is a special case of the multidimensional DFT over the finite field F_2^n, recent advances in computing K-sparse N-point Fourier transforms have provided insights into designing algorithms for computing sparse WHTs. There has been much recent work in computing a sparse Discrete Fourier Transform (DFT) [8]–[13]. Among these works, the Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm proposed in [13] uses O(K) samples and O(K log K) operations for any sparsity regime K = O(N^α) with α ∈ (0, 1) under a uniform sparsity distribution. Following the sparse-graph decoding design in [13] for DFTs, the Sparse Fast Hadamard Transform (SparseFHT) algorithm developed in [14] computes a K-sparse N-point WHT with K = O(N^α) using O(K log N) samples. Since K is sub-linear in N, their results can be interpreted as achieving a sample complexity of O(K log N). However, the algorithm specifically exploits the noiseless nature of the underlying signals and hence fails to work in the presence of noise.

In this paper, we consider the problem of computing a K-sparse N-point WHT in the presence of noise. A key question of theoretical and practical interest is: what price must be paid to be robust to noise? Surprisingly, there is no cost in sample complexity to being robust to noise, other than a constant factor. Specifically, we develop a robust algorithm which uses O(K log N) samples and has strong performance guarantees. We prove that our algorithm can recover the sparse WHT at constant signal-to-noise ratios (SNRs) with the same O(K log N) samples as for the noiseless case in [14]. This result contrasts with the DFT work in [15], for which robustness to noise increases the sample complexity from O(K) to O(K log N).

The rest of the paper is organized as follows: In Section II, we provide the problem formulation along with the signal and noise models. Section III provides our main results and a brief comparison with related literature. In Section IV, we explain the proposed front-end architecture for acquiring samples and the robust algorithm using a simple example. In Section V, we provide simulation results which empirically validate the performance of our algorithm.

Notations: Throughout this paper, the set of integers {0, 1, ..., N − 1} for some integer N is denoted by [N]. Lowercase letters, such as x, are used for time-domain expressions, and capital letters, such as X, are used for the transform-domain signal. Any letter with a bar, such as x̄ or X̄, represents a vector containing the corresponding samples. Given a real-valued vector v̄ ∈ R^N with N = 2^n, the i-th entry of v̄ is interchangeably represented by v[i], indexed by the decimal representation of i, or v_{i_0, i_1, ..., i_{n−1}}, indexed by the binary representation of i, where i_0, i_1, ..., i_{n−1} denotes the binary expansion of i with i_0 and i_{n−1} being the least significant bit (LSB) and the most significant bit (MSB), respectively. The notation F_2 refers to the finite field consisting of {0, 1}, with operations such as summation and multiplication defined modulo 2. Furthermore, we let F_2^n be the set of n-dimensional vectors with each element from F_2 and the addition of vectors done element-wise over this field. The inner product of two binary indices i and j is defined by ⟨i, j⟩ = Σ_{t=0}^{n−1} i_t j_t with arithmetic over F_2, and the inner product between two vectors is defined as ⟨x̄, ȳ⟩ = Σ_{t=1}^{N} x[t] y[t] with arithmetic over R.

II. SIGNAL MODEL AND PROBLEM FORMULATION

Consider a signal x̄ ∈ R^N containing N = 2^n samples x_m indexed with elements m ∈ F_2^n, and the corresponding WHT X̄ ∈ R^N containing N coefficients X_k with k ∈ F_2^n. The N-dimensional WHT X̄ of the signal x̄ is given by

    X_k = (1/√N) Σ_{m ∈ F_2^n} (−1)^{⟨k,m⟩} x_m,    (1)

where k ∈ F_2^n denotes the corresponding index in the transform domain.² We assume the WHT is a sub-linearly sparse signal with K = N^α non-zero coefficients X_k in the set k ∈ K and α ∈ (0, 1).

    ² Since both the single-ton test in [13] and the collision detection in [14] specifically exploit the noiseless nature of signals, they cannot be used in the noisy setting without major algorithmic changes. Our work fills this gap by developing a sparse WHT algorithm which is robust to noise.

Previous analysis [14] assumes exact measurements of the time-domain signal x̄. We generalize this setting by using noise-corrupted measurements:

    y_m = x_m + w_m,    (2)

where w_m ∼ N(0, σ²) is Gaussian noise added to the clean samples x_m. The SparseFHT algorithm [14] no longer works in the presence of noise. Therefore, the focus of this paper is to develop a robust algorithm which can compute the sparse WHT coefficients {X_k}_{k∈K} reliably from the noisy samples y_m with the same sample complexity as in the noiseless case.

III. RELATED WORK AND OUR RESULTS

In this section, we first frame our results in the context of previous work on recovering sparse transforms. We then summarize our main results.

A. Related Work

Due to the similarities between the Discrete Fourier Transform (DFT) and the WHT, we give a brief account of previous work on reducing the sample and computational complexity of computing a K-sparse N-point DFT. [8], [9] developed randomized sub-linear time algorithms that achieve near-optimal sample and computational complexities of O(K log N), with potentially large big-Oh constants [11]. Then, [10] further improved the algorithm for 2-D Discrete Fourier Transforms (DFTs) with K = √N, which reduces the sample complexity to O(K) and the computational complexity to O(K log K), albeit with a constant failure probability that does not vanish as the signal dimension N grows. On this front, the deterministic algorithm in [12] is shown to guarantee zero errors, but with complexities of O(poly(K, log N)).

A major improvement in terms of both complexities is given by the FFAST algorithm [13], which achieves a vanishing failure probability using only O(K) samples and O(K log K) operations for any sparsity regime K = O(N^α) and α ∈ (0, 1). The success of the FFAST algorithm is thanks to peeling-based decoding over sparse graphs, which depends on the single-ton test to pinpoint the “parity” Fourier bin containing only one “erasure event” (unknown non-zero DFT coefficient).

B. Our Results

We now summarize our main results on recovering a K-sparse N-point WHT of a signal from noisy time-domain samples. For our analysis, we make the following assumptions:
• The support of the non-zero WHT coefficients is uniformly random in the set [N].
• The unknown WHT coefficients take values from ±ρ.
• The signal-to-noise ratio SNR = ρ²/(Nσ²) is fixed.
The first assumption is critical to analyzing the peeling decoder. The next two assumptions merely simplify the analysis.

Theorem 1. For any sub-linear sparsity regime K = O(N^α) for α ∈ (0, 1), our robust algorithm based on the randomized hashing front-end (Section IV-A) and the associated peeling-based decoder (Section IV-B) can stably compute the WHT X̄ of any signal x̄ in the presence of noise w ∼ N(0, σ²I_{N×N}), with the following properties:
1) Sample complexity: The algorithm needs O(K log N) noisy samples y_m.
2) Computational complexity: The algorithm requires O(N log² N) operations.
3) Probability of success: The algorithm successfully computes the K-sparse WHT X̄ with probability at least 1 − ε_N for any ε_N > 0.

Proof. See Appendix A.

Importantly, the proposed robust algorithm can compute the sparse WHT using O(K log N) samples, i.e., no more than the SparseFHT algorithm [14] developed for the noiseless case. The overhead in moving from the noiseless to the noisy regime is only in the extra computational complexity.

IV. STABLE FAST WALSH-HADAMARD TRANSFORM VIA ROBUST SPARSE GRAPH DECODING

We now describe our randomized hashing front-end architecture and the associated peeling-based decoding algorithm
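To make the signal model concrete, the following is a minimal NumPy sketch of Eqs. (1)–(2): a normalized fast WHT computed with the O(N log N) butterfly recursion mentioned in the introduction, a K-sparse spectrum with ±ρ entries, and noisy samples y_m = x_m + w_m. The function name `fwht` and the specific parameter values are illustrative choices, not taken from the paper; note that with the symmetric 1/√N normalization of Eq. (1), the WHT is its own inverse.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform with the 1/sqrt(N) normalization of
    Eq. (1), computed with the O(N log N) butterfly recursion."""
    v = np.asarray(v, dtype=float).copy()
    size = v.size
    assert size & (size - 1) == 0, "length must be a power of two"
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v / np.sqrt(size)

# Noisy K-sparse model of Eqs. (1)-(2): X has K entries equal to +/- rho,
# x is its WHT (the transform is self-inverse under this normalization),
# and y = x + w with w ~ N(0, sigma^2) i.i.d. per sample.
rng = np.random.default_rng(0)
n, K, rho, sigma = 8, 4, 1.0, 0.01     # illustrative values; N = 2^n = 256
N = 2 ** n
X = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
X[support] = rho * rng.choice([-1.0, 1.0], size=K)
x = fwht(X)                            # clean time-domain samples x_m
y = x + sigma * rng.normal(size=N)     # noisy measurements y_m of Eq. (2)
```

Because the normalized Hadamard matrix is orthonormal, the transform preserves energy, which is what makes a fixed SNR in the transform domain meaningful for the time-domain measurements.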
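The hashing front-end itself is only introduced at the end of this excerpt, but the aliasing identity that subsampling-based front-ends in the FFAST/SparseFHT line of work rely on can be checked numerically. The sketch below is a hypothetical illustration (the matrix `M`, the helper names, and the normalization are assumptions, not the paper's construction): subsampling x̄ on the set {Mj : j ∈ F_2^b} for an n×b binary matrix M and taking a B-point WHT hashes each coefficient X_k into the bin indexed by Mᵀk (mod 2), scaled by √(B/N).

```python
import numpy as np

def fwht(v):
    # Normalized fast WHT, as in Eq. (1).
    v = np.asarray(v, dtype=float).copy()
    size = v.size
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v / np.sqrt(size)

def bits(v, n):
    # Binary expansion, LSB first (i_0 is the LSB, as in the Notations).
    return np.array([(v >> t) & 1 for t in range(n)])

def to_int(bitvec):
    return int(sum(int(bit) << t for t, bit in enumerate(bitvec)))

n, b = 4, 2                    # N = 16 samples hashed into B = 4 bins
N, B = 2 ** n, 2 ** b
M = np.array([[1, 0],          # illustrative n x b binary hashing matrix
              [0, 1],
              [1, 1],
              [0, 1]])

rng = np.random.default_rng(1)
x = rng.normal(size=N)
X = fwht(x)

# Front-end: subsample x on {M j : j in F_2^b}, then take a B-point WHT.
u = np.array([x[to_int(M.dot(bits(j, b)) % 2)] for j in range(B)])
U = fwht(u)

# Aliasing identity: U[l] = sqrt(B/N) * sum of X[k] over all k with
# M^T k = l (mod 2) -- each bin is a "parity check" on a coset of coefficients.
U_direct = np.zeros(B)
for k in range(N):
    l = to_int(M.T.dot(bits(k, n)) % 2)
    U_direct[l] += X[k]
U_direct *= np.sqrt(B / N)
```

When X̄ is sparse, most bins receive zero or one non-zero coefficient, which is the structure a peeling decoder exploits; a single-ton test for identifying such bins under noise is the subject of Section IV.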