Sublinear-Time Sparse Recovery, and Its Power in the Design of Exact Algorithms
Citation: Nakos, Vasileios. 2019. Sublinear-Time Sparse Recovery, and Its Power in the Design of Exact Algorithms. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:42029485

Sublinear-Time Sparse Recovery, and its Power in the Design of Exact Algorithms

A dissertation presented by Vasileios Nakos to The School of Engineering and Applied Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Computer Science.

Harvard University
Cambridge, Massachusetts
May 2019

© 2019 Vasileios Nakos. All rights reserved.

Dissertation Advisor: Jelani Nelson
Author: Vasileios Nakos

Abstract

In the sparse recovery problem one wants to reconstruct an approximately $k$-sparse vector $x \in \mathbb{R}^n$ using time and a number of measurements that are sublinear, i.e., much less than $n$, and ideally nearly linear in $k$. Depending on the setting, measurements correspond to one of the following: linear combinations of the entries of $x$, a non-linear function of some linear function of $x$, Fourier coefficients of $x$, or the logical OR of entries of $x$. In this thesis I describe several new contributions to the field of sparse recovery, and indicate how sparse recovery techniques can be of great significance in the design of exact algorithms, beyond the scope of the problems they were first created for.

• Standard Sparse Recovery:
  – The state-of-the-art $\ell_2/\ell_2$ scheme: optimal measurements, $O(k \log^2(n/k))$ decoding time and $O(\log(n/k))$ column sparsity, via a new, non-iterative approach.

• Non-Linear Sparse Recovery:
  – The first sublinear-time algorithm for one-bit compressed sensing.
  – A set of $O(k \log^c n)$-time algorithms for compressed sensing from intensity-only measurements. The algorithms use $O(k \log n)$ measurements, and are the first sublinear-time measurement-optimal algorithms for the problem.

• Sparse Fourier Transform:
  – A nearly sample-optimal algorithm for $\ell_\infty/\ell_2$ Sparse Fourier Transform in any dimension.
  – A nearly optimal sublinear-time deterministic algorithm for $\ell_\infty/\ell_1$ Sparse Fourier Transform.

• Design of Exact Algorithms:
  – A nearly optimal algorithm for sparse polynomial multiplication.
  – An almost optimal deterministic algorithm for the Modular-SubsetSum problem, running in time $m \cdot 2^{O(\sqrt{\log m \log \log m})}$.
  – A nearly optimal Las Vegas algorithm for the Modular-SubsetSum problem, running in time $\widetilde{O}(m)$.
  – An (almost) output-sensitive algorithm for the SubsetSum problem.
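The abstract refers to four measurement models. As a point of reference for the reader, the following toy numpy sketch generates each of them for a $k$-sparse vector; the Gaussian sensing matrix, the random masks, and all parameter choices below are illustrative assumptions, not the measurement ensembles constructed in this thesis.

# Illustrative sketch (not the constructions from this thesis) of the measurement
# models named in the abstract, applied to a k-sparse vector. Only numpy is assumed.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 1024, 10, 200              # ambient dimension, sparsity, number of measurements

# A k-sparse signal: k random positions carry random values, the rest are zero.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # a generic linear sketching matrix

y_linear    = A @ x                   # linear combinations of the entries of x
y_one_bit   = np.sign(A @ x)          # one-bit compressed sensing: only the signs are kept
y_intensity = np.abs(A @ x)           # intensity-only (phaseless) measurements
freqs       = rng.choice(n, size=m, replace=False)
y_fourier   = np.fft.fft(x)[freqs]    # access to a subset of the Fourier coefficients of x

# Group-testing-style measurements: each one reports whether a random mask hits any
# nonzero entry of x, i.e., the logical OR of the selected entries.
x_bool = x != 0
masks  = rng.random((m, n)) < 1.0 / k
y_or   = (masks & x_bool).any(axis=1)

# A sparse recovery algorithm observes only one of these y's (plus A, the frequency
# list, or the masks) and must output an approximately k-sparse estimate of x using
# time and a number of measurements scaling (nearly) linearly in k rather than in n.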
Contents

Abstract
Acknowledgments

Introduction
  0.1 Standard Compressed Sensing/Sparse Recovery
    0.1.1 Previous work
  0.2 Sparse Fourier Transform
  0.3 Non-Linear Compressed Sensing
    0.3.1 Compressed Sensing from Intensity-Only Measurements
    0.3.2 One-Bit Compressed Sensing
  0.4 Sparse Polynomial Multiplication
  0.5 Subset Sum

1 Standard Compressed Sensing
  1.1 $\ell_2/\ell_2$ Compressed Sensing; Without Iterating
    1.1.1 Our result
    1.1.2 Notation
    1.1.3 Technical statements
    1.1.4 Overview of techniques and difference with previous work
    1.1.5 Possible applications of our approach to other problems
  1.2 The Identification Linear Sketch
    1.2.1 Design of the sketch and decoding algorithm
    1.2.2 Concentration of estimation
    1.2.3 Analysis of running time
    1.2.4 Guarantees of the algorithm
    1.2.5 Bounding the number of measurements
  1.3 The Pruning Linear Sketch
    1.3.1 Design of the sketching matrix, and helpful definitions
    1.3.2 Analyzing head and badly-estimated coordinates
    1.3.3 Guarantee of algorithm
    1.3.4 Time, measurements, column sparsity, and probability
  1.4 Tail Estimation
    1.4.1 Random walks
    1.4.2 $p$-stable distributions
    1.4.3 $\ell_p$-tail estimation algorithm

2 Non-Linear Compressed Sensing
  2.1 Compressed Sensing from Intensity-Only Measurements
    2.1.1 Results
    2.1.2 Toolkit
    2.1.3 Noiseless Signals
    2.1.4 $\ell_\infty/\ell_2$ Algorithm
    2.1.5 $\ell_2/\ell_2$ with low failure probability for real signals
    2.1.6 Analysis
  2.2 One-Bit Compressed Sensing
    2.2.1 Our Contribution
    2.2.2 Preliminaries and Notation
    2.2.3 Main Result
    2.2.4 Overview of our Approach
    2.2.5 Toolkit
  2.3 Main Algorithm
    2.3.1 Reduction to small Sparsity
    2.3.2 One-Bit PartitionPointQuery
    2.3.3 One-Bit b-tree
    2.3.4 One-Bit ExpanderSketch

3 Sparse Fourier Transform
  3.1 (Nearly) Sample Optimal Sparse Fourier Transform in Any Dimension
    3.1.1 Preliminaries
    3.1.2 Our result
    3.1.3 Summary of previous Filter function based technique
    3.1.4 RIP property-based algorithms: a quick overview
    3.1.5 Overview of our technique
    3.1.6 Algorithm for d-dimensional Sparse Fourier Transform
    3.1.7 Notations
    3.1.8 Algorithm
    3.1.9 Analysis
  3.2 Deterministic Sparse Fourier Transform with an $\ell_\infty$ Guarantee
    3.2.1 Technical Results
    3.2.2 Technical Toolkit
    3.2.3 Overview
    3.2.4 Linear-Time Algorithm
    3.2.5 Sublinear-Time Algorithm
    3.2.6 Incoherent Matrices via Subsampling DFT Matrix
    3.2.7 Incoherent Matrices via the Weil bound
    3.2.8 Reduction of the $\ell_\infty$ norm

4 Sparse Recovery and the Design of Exact Algorithms
  4.1 Nearly Optimal Sparse Polynomial Multiplication
    4.1.1 Preliminaries
    4.1.2 Result and Proof
    4.1.3 Discussion
    4.1.4 Multivariate Polynomials
  4.2 Modular Subset Sum in Almost Linear Time. Deterministically
    4.2.1 Preliminaries
    4.2.2 Results
    4.2.3 Technical Toolkit
    4.2.4 Symmetry group and its properties
    4.2.5 Computing the Sumset of Two Sets
    4.2.6 Cauchy-Davenport and Kneser's Theorem
    4.2.7 Modular Subset Sum over $\mathbb{Z}_p$
    4.2.8 Modular subset sum over $\mathbb{Z}_m$, for any $m$
    4.2.9 Output-sensitive Sumset Computation
    4.2.10 Computing the symmetry group
  4.3 Subset Sum in Almost Output Sensitive Time
    4.3.1 Preliminaries
    4.3.2 Our Contribution
    4.3.3 Output-sensitive sumset computation in the non-modular case
    4.3.4 A simple algorithm for all-attainable Subset Sums (case $t = \sigma_S$)
    4.3.5 Algorithm for general $t$
  4.4 General Algorithm

5 Conclusion and Open Problems
  5.1 Non-Linear Compressed Sensing
  5.2 Sparse Fourier Transform
  5.3 Polynomial Multiplication and Sumset Computation
  5.4 Subset Sum

References

List of Tables

1.1 (A list of $\ell_2/\ell_2$-sparse recovery results.) We ignore the "$O$" for simplicity. LP denotes the time of solving Linear Programs [CLS19], and the state-of-the-art algorithm takes $n^{\omega}$ time, where $\omega$ is the exponent of matrix multiplication. The results in [Don06, CRT06, NT09a] do not explicitly