DISCRETE SAMPLING: DISCRETE GENERALIZATIONS OF THE NYQUIST-SHANNON SAMPLING THEOREM
A DISSERTATION SUBMITTED TO THE DEPARTMENT OF ELECTRICAL ENGINEERING AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
William Wu
June 2010
© 2010 by William David Wu. All Rights Reserved. Re-distributed by Stanford University under license with the author.
This work is licensed under a Creative Commons Attribution- Noncommercial 3.0 United States License. http://creativecommons.org/licenses/by-nc/3.0/us/
This dissertation is online at: http://purl.stanford.edu/rj968hd9244
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Brad Osgood, Primary Adviser
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Thomas Cover
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
John Gill, III
Approved for the Stanford University Committee on Graduate Studies. Patricia J. Gumport, Vice Provost Graduate Education
This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.
Abstract
THIS DISSERTATION lays a foundation for the theory of reconstructing signals from finite-dimensional vector spaces using a minimal number of samples. We call this theory Discrete Sampling. Suppose the signals we wish to reconstruct are drawn from a known k-dimensional subspace of C^n, denoted by Y. Then the two problems that Discrete Sampling seeks to address are:
1. Interpolating System Problem: Find an index set I ⊆ [n] of size k and basis vectors U := {u_i : i ∈ I} such that

    ∀ f ∈ Y :  f = ∑_{i∈I} f[i] u_i.

A pair (I, U) satisfying this equation is called an interpolating system for Y.
2. Orthogonal Interpolating System Problem: Find an interpolating system for Y in which the u_i's are orthogonal.
In the first problem, the pairing of a sampling sequence I and an interpolating basis U that can reconstruct all signals in Y is dubbed an interpolating system. These are the basic objects of interest that we aim to study. The second problem asks for a more refined version of this object, in which the interpolating basis is orthogonal. The Nyquist-Shannon sampling theorem provides an example of an orthogonal interpolating system: there, the vector space is the Paley-Wiener space Y = F^{-1}[-1/2, 1/2], the sampling sequence is I = Z, and the orthogonal basis vectors are U = {sinc(t − i) : i ∈ Z}.

In the first half of this dissertation, we provide a simple theoretical framework for identifying and constructing interpolating systems (IS) and orthogonal interpolating systems (OIS) in general finite-dimensional vector spaces. This can be thought of as a generalization of Shannon's sampling theorem in the finite-dimensional setting. Which vector spaces will admit these systems? What algebraic properties must be satisfied? While it can be proven that every vector space has an interpolating system, not all vector spaces have an orthogonal interpolating system. When does an OIS exist, and how can we construct it?

In the second half of this dissertation, we investigate interpolating systems in finite-dimensional bandlimited spaces, which are parameterized by their DFT support. We provide algorithms for constructing all orthogonal interpolating systems for such spaces, yielding a sampling dictionary. The dictionaries exhibit many patterns, making connections with group theory, number theory, and graph theory, specifically: orbit counting, vanishing sums of roots of unity, primes, modular arithmetic, difference sets, cliques, and perfect graphs. All algorithms described have been implemented and can be run online for further investigation at the webMathematica-powered website http://www.discretesampling.com.

The remainder of the dissertation deals with smaller topics. We briefly analyze sequency-limited spaces, which are defined by their supports in a Discrete Haar Wavelet Transform domain, and give a classification theorem for all orthogonal interpolating systems in terms of posets and Hasse diagrams. Lastly, we draw connections between our work and coding theory, and provide thoughts for future research.
[Concept map: Discrete Sampling at the intersection of linear algebra (dot products, projections, eigenvalues, Vandermonde, null space, RREF), number theory (circulancy, modular arithmetic, squarefree numbers, primality, difference sets), combinatorics (coin changing, necklace enumeration, posets), graph theory (clique problem, perfect graphs, adjacency matrix), harmonic analysis (DFT, cyclic convolution, wavelets, reproducing kernel, Nyquist-Shannon), and abstract algebra (dihedral groups, Polya counting, irreducible polynomials, vanishing sums of roots of unity).]
Acknowledgments
BELOW ARE THE MAIN PEOPLE AND ORGANIZATIONS that I would like to thank for helping me through my graduate school career.
Brad Osgood
John Gill
Thomas Cover
David Wayne Tillay
Frank H. and Eva Buck Foundation
Jiehua Chen
Hsin and Frances Wu
Ever since my first interaction with him, my principal advisor Brad Osgood has always been extremely supportive, encouraging, and optimistic at all times. Although I initially lacked the maturity to make a dent in the problems he gave me, Brad trained me, and taught me to organize my thoughts in a disciplined fashion. By following his example, keeping meticulous notebooks, and systematically writing to myself in a conversational style, I learned to think about long-term problems in an incremental and patient fashion. In addition, Brad has been extremely generous in sharing his time, his mental energy, and his ideas. I recall once, in hope of deciphering the underlying pattern behind a problem, Brad typed out fifty pages of special cases! Brad also envisioned the foundational theory of Discrete Sampling, inspired by the possibility of sampling more efficiently in medical imaging applications. I must also thank Brad for introducing me to Mathematica, which
has been crucially helpful for doing research. These lessons in writing, discipline, collaboration, and mathematical programming that Brad gave me have deeply changed the way I think. Living up to Brad's example will be an eternal challenge.

I am very appreciative to John Gill for all his support over the years. John gave me guidance both before and after the qualifying exams, and kindly helped me in so many ways, including sharing a lifetime's worth of research problems, encouraging me to learn Perl, and even recovering data from two broken laptops. I have always been impressed by John's razor-sharp mathematical intuition and computer science know-how, having spent many hours in his office watching him at work. More impressively, John demonstrates how to be both humble and indispensable, and I find that very inspiring.

Another personal hero of mine is Thomas Cover, who personifies everything that I once fantasized about becoming as a child: a gambler possessing deep mathematical insights, and also being able to explain them with wit, clarity, style, and tremendous humor. We share a love for clever things. I thank him for helping me get into Stanford, rooting for me during the quals, always offering a haven to share ideas and puzzles on Wednesdays, and for the memorable opportunity to serve as his information theory teaching assistant.

I want to thank David Wayne Tillay for inspiring me as a middle school student to pursue math and science. In a town without many opportunities for high academic achievement, "Dr. Tillay" was a shiny anomaly. He gave us sophisticated lab equipment, infused humor into the learning process, and created the only science fair our city ever had. Without Dr. Tillay, I might have ended up as a starving cartoonist instead of a starving graduate student.

I am deeply indebted to the Frank H. and Eva Buck Foundation, which somehow selected me as a recipient of its scholarship in 1999, most likely due to an accounting error.
My entire tertiary education has been funded through their insane generosity and the stew- ardship of Ms. Gloria Brown.
I thank a number of researchers in related areas for their consultation and/or giving me public speaking opportunities. Joe Sawada at the University of Guelph was very helpful in explaining his efficient algorithms for necklace enumeration and identification. Researchers who discussed applications with me included Daniel Ensign, Kendall Fruchey, and Daniel Rosenfeld from Stanford's chemistry department, Daniel M. Spielman from Stanford's radiological sciences lab, and Babak Hassibi from Caltech's EE department. Kannan Soundararajan at Stanford lent his expertise in modular arithmetic. I graciously thank Maria Chudnovsky at Columbia University for patiently analyzing the difference graphs with me, and giving me the opportunity to speak in her discrete math seminar. Walter Neumann at Barnard College agreed to an out-of-the-blue discussion with me about cyclotomic polynomials. Sinan Gunturk invited me to give a talk at the Courant Institute, which was very helpful; I also thank Mark Tygert and Ron Peled at the Courant for their insightful discussions. Other academics that invited me to give talks include Julius Smith at CCRMA, Jon Hamkins at JPL, Thierry Klein at Bell Labs, and Amit Singer at Princeton, all providing useful feedback.

A short laundry list of friends and family is in order. I thank Joseph C. Koo, Fernando Gomez-Pancorbo, and Jeonghun Noh for many interesting late-night discussions that helped us all understand our research better. I am grateful for my outstanding quals study group, consisting of Sumanth Jagannathan, Vahideh Manshaadi, and Piyush Agram. And many thanks to Prashant and Michelle Loyalka, Nikhil Ravi, Omid Azizi, Song Li, Alan Asbeck, Rajiv Agarwal, and Austin Kim for making my time at Stanford much more interesting.
I am extremely lucky to have Jiehua Chen as my partner-in-crime in all aspects of life, a true equal in every sense, a person who understands everything I do and then some, a person that cares deeply about me, a person I never expected to meet. Lastly, I thank my parents Hsin and Frances Wu, who sacrificed for me their whole lives, and whose story continues to inspire me.
Notation
N = naturals
Z = integers
[n] = integers {1, 2, ..., n}
[a : b] = integers {a, a + 1, ..., b}
C^n = vector space of n-vectors over the complex field
Y = vector space of signals, a subspace of C^n
n = length of discrete signal to be sampled and reconstructed
d = dimension of Y
I = sampling set: a set of d increasing integers indicating the sampling locations
U = set of interpolating basis vectors {u_i : i ∈ I}
U = matrix whose columns are the vectors in U
(I, U) = interpolating system
E_I^T = d × n matrix such that E_I^T A selects the rows of A whose indices are in I
E_I = n × d matrix such that A E_I selects the columns of A whose indices are in I
I_k = k × k identity matrix
e_i = ith canonical basis vector in C^n
C^I = coordinate subspace of C^n defined by span{e_i : i ∈ I}
B^{n:J} = B_J = bandlimited subspace of C^n whose DFT has support set J
ω_n = nth root of unity
F = Fourier matrix
I_τ = right rotation of the set I by τ units
I^< = reflection of the set I
K = projection matrix
h = first column of a circulant projection matrix K
Z = zero set of h
I[k] = kth smallest term in the set I
∆I = difference set of I, defined by {i − i′ : i, i′ ∈ I; i ≠ i′}
I^× = I \ {0}
G = (V, E) = graph with vertex set V and edge set E
Φ_n(x) = nth cyclotomic polynomial
o(g) = order of the group element g
(a, b) = gcd(a, b) = greatest common divisor of the integers a and b
[a, b] = lcm(a, b) = least common multiple of the integers a and b
φ(n) = Euler totient function
Z[x] = ring of polynomials with integer coefficients
(Z/sZ)^× = multiplicative group of integer residues modulo s
W^{n:J} = W_J = discrete Haar wavelet subspace of C^n, spanned by the Haar basis vectors whose indices lie in J
Contents

Abstract ..... iv
Acknowledgments ..... vii
Notation ..... x
List of Tables ..... xvii
List of Figures ..... xviii

1 Introduction ..... 1

2 General Theory of Discrete Sampling ..... 7
  2.1 Interpolating Systems ..... 7
  2.2 A Simple Example ..... 9
  2.3 Interpolating Systems ..... 12
    2.3.1 Basic Observations ..... 12
    2.3.2 Existence and Construction ..... 16
    2.3.3 Relations Between Interpolating Systems ..... 19
    2.3.4 Numerical Stability ..... 21
  2.4 Orthogonal Interpolating Systems ..... 23
    2.4.1 Why Orthogonal? ..... 23
    2.4.2 Properties of Projection Matrices ..... 25
    2.4.3 Finding and Constructing an OIS ..... 27
  2.5 Telegraphic Summary of Chapter 2 ..... 35

3 Discrete Sampling In Bandlimited Spaces ..... 37
  3.1 Definition of Bandlimited Spaces ..... 37
  3.2 Interpolating Systems ..... 41
    3.2.1 Interchange Duality ..... 41
    3.2.2 Dihedral Symmetry ..... 42
    3.2.3 Building a Sampling Dictionary ..... 45
    3.2.4 Lifting and Dropping ..... 52
  3.3 Orthogonal Interpolating Systems ..... 54
    3.3.1 The Circulant Projection Matrix ..... 54
    3.3.2 Dihedral Symmetry ..... 60
    3.3.3 Cliques in Difference Graphs ..... 62
    3.3.4 Filtering Out Equivalent Sampling Sets ..... 66
    3.3.5 Complete OIS Algorithm for Bandlimited Spaces ..... 71
    3.3.6 Perfect Graph Conjecture for Bandlimited Spaces ..... 76
    3.3.7 h-Nullstellensatz ..... 80
    3.3.8 Vanishing Sums of Roots of Unity ..... 92
    3.3.9 Necessary Conditions ..... 95
    3.3.10 Orthogonal Interchange Conjecture ..... 97
    3.3.11 Fuglede's Conjecture ..... 98
  3.4 Prime n ..... 103
  3.5 Uniform Sampling ..... 105
    3.5.1 Discrete Nyquist-Shannon Sampling Theorem ..... 105
    3.5.2 Generalization of Nyquist-Shannon ..... 110
    3.5.3 Orthogonal Interpolating Systems ..... 124
  3.6 Telegraphic Summary of Chapter 3 ..... 134

4 Discrete Sampling In Wavelet Spaces ..... 138
  4.1 Discrete-Time Haar Wavelets ..... 138
  4.2 Orthogonal Interpolating Systems ..... 141
    4.2.1 A Series of Examples ..... 142
    4.2.2 Countdown Theorem ..... 156
  4.3 Telegraphic Summary of Chapter 4 ..... 166

5 Conclusions and Future Research ..... 167
  5.1 Algebraic Coding Theory ..... 169
  5.2 Estimation and Machine Learning ..... 171
  5.3 Compressed Sensing ..... 172
  5.4 Discrete Fourier Analysis ..... 173

A Computer Programs for DS ..... 175
B Sub-Nyquist Sampling Puzzle ..... 179
C Necklaces, Bracelets, and Difference Sets ..... 181
  C.1 Counting Necklaces and Bracelets ..... 181
  C.2 Enumerating Necklaces: FKM ..... 186
  C.3 Difference Sets ..... 187
D Circulant Matrices and the DFT ..... 190
E Orthogonal Interchange Theorem For Three-Dimensional Subspaces ..... 192
F Chebotarev's Theorem ..... 199
G Modular Arithmetic ..... 203
H Sampling Dictionaries for Bandlimited Spaces ..... 206
  H.1 Interpolating System Tables Up Through n = 9 ..... 206
  H.2 Orthogonal Interpolating System Tables Up Through n = 21 ..... 218

Bibliography ..... 250
List of Tables
3.1 Sampling dictionary for bandlimited signals of length n = 6 ..... 47
3.2 Equivalence class represented by six-bead black and white bracelet {0, 1, 3} ..... 47
3.3 Sampling dictionary for bandlimited signals of length n = 8 ..... 50
3.4 Inverse dictionary for bandlimited signals of length n = 6 ..... 51
3.5 Using Duval signatures to filter unique sets up to rotation, marked in bold ..... 70
3.6 Full sampling dictionary for n = 6 ..... 96
3.7 Full sampling dictionary for n = 8 ..... 101
3.8 Orthogonal interchange demonstration ..... 102
3.9 Sampling dictionary of bandlimited spaces for n = 7 ..... 103
3.10 Spectra reconstructible with I = {0, 3, 6, 9}, where n = 12 ..... 111
3.11 Consecutive samples always interpolate. Example for n = 6 ..... 114
3.12 Bandlimited subspaces of C^8 reconstructible using I = {0, 4} ..... 121
3.13 Interpolating systems for B^{8:{0,2}} ..... 123
5.1 Analogues between DS and Algebraic Coding Theory ..... 170
5.2 Sampling Dictionary for Bandlimited Spaces with n = 6 ..... 171
5.3 Nyquist-Shannon, Compressed Sensing, and DS ..... 172
5.4 Continuous and discrete versions of Fourier analysis ideas ..... 174
List of Figures

1.1 Graphical illustration of Shannon's sampling theorem ..... 3
1.2 Scenarios with more prior knowledge than the highest frequency ..... 4
1.3 Find minimum sampling rate allowing reconstruction ..... 5
2.1 Satisfaction of interpolation condition in Shannon's interpolating system ..... 14
2.2 Finding diagonal submatrices in K ..... 31
3.1 A bandlimited signal from B^{12:{0,1,4,5,8,9}} ..... 39
3.2 DFT of the signal from B^{12:{0,1,4,5,8,9}} ..... 39
3.3 Lifting and dropping of interpolating systems between dictionaries ..... 52
3.4 Off-diagonal zeros pulled back to first column by circulancy ..... 59
3.5 Detailed view of zeros pulled back to first column ..... 63
3.6 Difference graph for B^{8:{0,1,4,5}} ..... 65
3.7 Projection matrix for B^{8:{0,1,4,5}}, with two OISes highlighted ..... 66
3.8 I_1 = {0, 1, 4, 5} and I_2 = {0, 3, 4, 7} are rotationally equivalent ..... 67
3.9 {0, 1, 5, 12} and {0, 4, 5, 9} are rotationally equivalent ..... 69
3.10 Block diagram for OIS search algorithm ..... 71
3.11 Difference graph associated with B^{18:{0,1,2,9,10,11}} ..... 72
3.12 5-cliques in the difference graph for B^{18:{0,1,2,9,10,11}} ..... 73
3.13 Orthogonal sampling sets for B^{18:{0,1,2,9,10,11}} ..... 73
3.14 OIS basis vector for B^{18:{0,1,8,9,10,17}} ..... 74
3.15 A menagerie of (perfect) difference graphs for various bandlimited spaces ..... 77
3.16 Difference graph for B^{144:{0,27,30,35,60,72,75,83,102,123,131,132}}. ω(G) = γ(G) = 11 ..... 78
3.17 Difference graph for B^{144:{0,16,30,44,58,74,80,94,108,110,124,138}}. ω(G) = γ(G) = 11 ..... 79
3.18 Geometric construction of h for B^{8:{0,1,4,5}}. (Multiplicities not shown.) ..... 93
3.19 Geometric construction of h for B^{18:{0,1,8,9,10,17}}. (Multiplicities not shown.) ..... 94
3.20 Minimal vanishing sums of roots of unity ..... 95
3.21 Implications between T1, T2, tiling, and existence of OIS ..... 100
3.22 The discrete sinc, or "dinc" ..... 108
3.23 Bandlimited subspaces of C^16 that have {0, 2, 4} as an interpolating system ..... 124
3.24 (Orthogonal) interpolating systems for B^{12:{0,3,6,9}} ..... 126
3.25 Diagonal submatrices associated with all OISes for B^{12:{0,3,6,9}} ..... 127
4.1 Discrete Haar wavelet basis for n = 8 ..... 139
4.2 Discrete Haar wavelet basis for n = 8, with certain sequencies chosen ..... 142
4.3 General form of a discrete Haar outer product ..... 143
4.4 Basis vectors for W^{16:{{0,0},{0,1},{1,0},{1,1},{2,0},{2,2},{3,5}}} ..... 157
4.5 Hasse diagram for W^{16:{{0,0},{0,1},{1,0},{1,1},{2,0},{2,2},{3,5}}} ..... 159
4.6 Maximal chains of the poset ..... 160
4.7 Projection matrix for W^{16:{{0,0},{0,1},{1,0},{1,1},{2,0},{2,2},{3,5}}} ..... 161
4.8 Failed launch in W^{32:{0,1,2,3,5,7,9,10,11,12,14,15,16,20,24,29,30}} ..... 162
4.9 Projection matrix for W^{32:{0,1,2,3,5,7,9,10,11,12,14,15,16,20,24,29,30}} ..... 163
5.1 Graphical outline of dissertation chapters ..... 168
5.2 Intersections of Discrete Sampling with different fields ..... 169
A.1 Discrete sampling in random null spaces ..... 176
A.2 Interactive visualization of clique-finding algorithm ..... 177
A.3 Visualization of failed countdown criterion ..... 178
B.1 Solution to sub-Nyquist sampling puzzle ..... 180
C.1 Bracelets I_1 = {0, 2, 3, 4} and I_2 = {0, 3, 4, 5}, for n = 8 ..... 189
Chapter 1
Introduction
THE NYQUIST-SHANNON SAMPLING THEOREM, proven by Claude Shannon in 1949 [1], is used today in practically all digital communications, imaging, control, audio, and video systems, whenever an analog-to-digital conversion is performed. The theorem may be stated as follows:
Theorem 1.0.1. Let F^{-1}[-1/2, 1/2] denote the vector space of complex-valued functions over the reals whose Fourier Transforms have support restricted to the interval [-1/2, 1/2]. Then for all f in F^{-1}[-1/2, 1/2], we have the interpolation equation

    f(t) = ∑_{n∈Z} f(n) · sinc(t − n).

In other words, a bandlimited signal may be perfectly reconstructed from uniform samples, appropriately spaced, using the shifted sincs¹ as a basis. The utility of Shannon's theorem is that it bridges the gap between the analog and digital worlds. Many information sources, such as audio or natural images, are inherently analog. Unless we first convert
¹ sinc(t) := sin(πt)/(πt). The shifted sincs are also referred to as the cardinal series, or Dirichlet kernel.
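Theorem 1.0.1 can be checked numerically. Below is a minimal Python sketch; the test signal f(t) = sinc(t/2)², the truncation level N, and the evaluation point are illustrative choices, not from the text. This f has a triangular Fourier transform supported on [-1/2, 1/2], so it lies in the space of the theorem and its integer samples should reconstruct it.

```python
import math

def sinc(t):
    """Normalized sinc: sin(pi t)/(pi t), with sinc(0) = 1."""
    if t == 0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

def f(t):
    # Bandlimited to [-1/2, 1/2]: the Fourier transform of sinc(t/2)^2
    # is a triangle supported on that interval.
    return sinc(t / 2) ** 2

def shannon_reconstruct(t, N=500):
    # Truncated cardinal series: sum f(n) sinc(t - n) over |n| <= N.
    return sum(f(n) * sinc(t - n) for n in range(-N, N + 1))

t0 = 0.3
err = abs(shannon_reconstruct(t0) - f(t0))
print(err)  # small truncation error, far below 1e-4
```

The samples f(n) decay like 1/n², so the truncated series converges quickly; increasing N drives the error toward zero.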
these analog signals into a digital form, it is not possible to employ digital signal processing, so the signal must first be both sampled and quantized. The sampling problem is addressed by Shannon's interpolation equation in Theorem 1.0.1, which writes an analog function f(t) in terms of a discrete vector f(n). Shannon's underlying assumption that the signal is bandlimited in the Fourier domain is also reasonably satisfied by many information sources. For example, in telephone communications, one rarely hears frequencies exceeding 8 kHz, so a sampling rate of 16 kHz can be applied. Figure 1.1 graphically illustrates Shannon's theorem.

Our goal is to generalize Shannon's interpolation equation. We start by re-examining it from a linear algebra perspective, rather than the operational perspective often presented in engineering classes.
Firstly, note that the integer-shifted sinc functions {sinc(· − n)}_{n∈Z} constitute an orthogonal basis for the vector space F^{-1}[-1/2, 1/2]. Thus, any function from this vector space can be decomposed as

    f(t) = ∑_{n∈Z} ⟨f, sinc(t − n)⟩ sinc(t − n).

Expanding the L² inner product, this can be rewritten as

    f(t) = ∑_{n∈Z} ( ∫_{−∞}^{∞} f(u) sinc(u − n) du ) sinc(t − n).

Direct evaluation of these integrals requires knowing the entire realization of f, since the integrands contain f(u) as u ranges over all of R. However, Shannon's theorem reduces these integrals to mere samples, so that we need not look at all of f. Motivated by the desire to generalize, we are led to a natural problem:
Problem. What vector spaces admit an orthogonal basis where inner products are replaced
[Figure 1.1: Graphical illustration of Shannon's sampling theorem: a bandlimited signal x(t) with spectrum X(f) supported on [−W, W], its samples x[m] = x(mT) taken at interval T = 1/(2W), and reconstruction by superposition, x(t) = ∑_m x(mT) · sinc((t − mT)/T).]
with samples?
Shannon's sampling theorem answers this question for a specific case: namely, if the vector space is F^{-1}([-1/2, 1/2]), then the shifted sincs are the orthogonal basis, and the inner products are replaced with samples taken at the integers. However, are there other vector spaces that also have Nyquist-Shannon style interpolation equations?

Our question is motivated not only theoretically, but practically, by a number of applications. Figure 1.2 illustrates several of them. For example, in spectroscopy, the chemicals
[Figure 1.2: Scenarios with more prior knowledge than the highest frequency: music, spectroscopy, structural engineering, communications, chemistry, medical imaging.]
being analyzed and their associated spectra are known beforehand. In structural engineering, when a structure is excited, the acceleration signal viewed in the Fourier domain has peaks concentrated about the natural frequencies, which are computed a priori. In chemistry, the technique known as two-dimensional infrared rephasing produces a signal whose two-dimensional Fourier Transform has four distinct spectral islands, as shown. In general, one can imagine many scenarios where more is known than simply the highest frequency.

As a final example, consider Figure 1.3. The problem here, found in many signal processing textbooks and exams, is to calculate the minimum sampling rate that allows the signal s(t) to be perfectly reconstructed. The knee-jerk wrong approach is to apply Shannon's sampling theorem: the highest frequency present is f_0 + B Hz, so we sample at 2(f_0 + B) Hz. This is incorrect because Shannon's theorem implicitly assumes that all frequencies below the highest frequency are activated. However, in this case, the spectral gap yields unused degrees of freedom that can be exploited to sample and reconstruct at sub-Nyquist rates. See Appendix B for the solution.
[Figure 1.3: Find the minimum sampling rate allowing reconstruction: a spectrum S(f) consisting of two bands of width 2B centered at ±f_0.]
Having presented the motivations behind our research, the remainder of this dissertation proceeds as follows. In Chapter 2, we describe the general theory of what we are calling Discrete Sampling. By restricting ourselves to finite dimensional vector spaces, all functions that we wish to reconstruct can be considered discrete vectors; hence the name. The key definition here is that of an interpolating system, a generalization of the Nyquist-Shannon interpolation equation to abstract vector spaces. We address questions of existence and uniqueness of interpolating systems, and the ramifications of insisting on an orthogonal basis. In Chapter 3, we conduct an in-depth investigation of the interpolating systems for bandlimited spaces, which are vector spaces consisting of signals whose Discrete Fourier Transforms have supports restricted to a particular discrete set. The results here can be grouped into two categories: (1) computational algorithms for finding interpolating systems, and (2) structure of interpolating systems. Chapter 4 is a short investigation of orthogonal interpolating systems in sequency-limited spaces, in which the discrete Haar wavelet transform of the signals lies in a prespecified support. Lastly, Chapter 5 contains concluding remarks and ideas for future work.
[Figure: graphical outline of the dissertation. Ch. 1: introduction and motivation via Shannon's interpolation equation f(t) = ∑_{n∈Z} f(n) · sinc(t − n). Ch. 2: Discrete Sampling and (orthogonal) interpolating systems, f = ∑_{i∈I} f[i] u_i with the u_i orthogonal. Ch. 3: Discrete Sampling in bandlimited spaces: sampling theorems, computation of orthogonal bases (difference graphs, clique-finding, filtering equivalences), and properties and classification via sampling dictionaries. Ch. 4: Discrete Sampling in wavelet spaces.]

Chapter 2
General Theory of Discrete Sampling
2.1 Interpolating Systems
To reconstruct signals from the vector space F^{-1}[-1/2, 1/2], Shannon's sampling theorem provides us with two pieces of information:
• where to take samples (answer: the integers), and
• how to reconstruct from the samples (answer: shifted sincs).
Thus, to generalize this, an interpolating system for an arbitrary vector space Y should be a pair (I, U), where I is the sampling set and U is the interpolating basis.
Definition 2.1.1 (Interpolating System (IS)). Let Y be a d-dimensional subspace of C^n.¹ An interpolating system for Y is a pair (I, U), where
• I is the sampling set, a set of d strictly increasing integers in [1 : n], and

¹ Here we assume a complex vector space, but in fact any scalar field will do.
• U := {u_i : i ∈ I} is the interpolating basis: a basis such that for all f ∈ Y,

    f = ∑_{i∈I} f[i] u_i.    (2.1)
A few remarks are in order. Firstly, the key point about Equation 2.1 is that the coefficients scaling the u_i's are restricted to be samples. By the definition of a basis, every vector f ∈ Y can be written as some linear combination f = ∑ α_i u_i. However, here we add the constraint that the scaling coefficients α_i must be samples {f[i]}_{i∈I}. Secondly, the "sample" f[i] refers to the ith coordinate of the n-vector f; in other words, the inner product of f with the ith canonical basis vector.² Lastly, Equation 2.1 may be alternatively written in matrix form. Let U³ be the n-by-d matrix whose columns are {u_i : i ∈ I}. Let E_I^T be the d-by-n 0-1 matrix that operates on the left and selects those rows whose indices lie in I. (For example, if I = {2, 3, 5}, then E_I^T f selects the 2nd, 3rd, and 5th entries of f.) Then Equation 2.1 becomes
    f = U E_I^T f,  ∀ f ∈ Y.    (2.2)
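Equation 2.2 is easy to test mechanically. The following pure-Python sketch checks it on a toy example; the subspace Y ⊂ R⁴, the sampling set, and the free entries of U are hypothetical illustrative choices, not from the text. The identity submatrix in the sampled slots is what makes the condition hold.

```python
# Toy verification of Equation 2.2: f = U E_I^T f for all f in Y.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

n, d = 4, 2
I = [0, 1]                      # sampling slots (0-indexed here)

# Interpolating basis as columns of U; note the identity submatrix
# sitting in the sampled slots (rows 0 and 1).
U = [[1, 0],
     [0, 1],
     [2, -1],                   # free entries: arbitrary choice
     [3,  5]]

# E_I^T is the d-by-n 0-1 matrix selecting the sampled rows.
EIT = [[1 if c == i else 0 for c in range(n)] for i in I]

# An arbitrary f in Y = span of U's columns:
a1, a2 = 0.7, -1.3
f = [a1 * U[r][0] + a2 * U[r][1] for r in range(n)]

samples = matvec(EIT, f)        # just (f[0], f[1])
recon = matvec(U, samples)      # U E_I^T f
print(recon == f)               # True: (I, U) interpolates Y
```

Because the top of U is the identity, the samples coincide with the synthesis coefficients, which is exactly the constraint that distinguishes an interpolating basis from a generic basis.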
Rearranging Equation 2.2, we can characterize interpolating systems in terms of the nullspace:

Definition 2.1.2 (Nullspace Characterization). (I, U) is an interpolating system for Y if and only if Y = N(I_n − U E_I^T), where I_n is the n-by-n identity matrix.

Proof. Rearranging Equation 2.2,
    (I − U E_I^T) f = 0,  ∀ f ∈ Y.

² Alternatively, we could have defined our samples f[i] to be inner products with some general basis vectors B = {b_1, ..., b_n} that span C^n. However, we choose the canonical basis here without loss of generality.
³ Notation: when referring to the set of interpolating basis vectors {u_i : i ∈ I}, we will write U (a set). However, when referring to the n × d matrix whose columns are these basis vectors, we will write U.
Hence, Y (I UET ). Conversely, if f (I UET ), then ⊆N − I ∈N − I
f = f[i]ui i X∈I and f lies in the span of the vectors u : i , which, by the definition of an interpolating { i ∈ I} basis, must constitute a basis for Y. Thus, f Y, and (I UET ) Y. ∈ N − I ⊆ To make our interpolating systems more similar to Shannon’s, we could also insist that the vectors in := u : i form an orthogonal basis, just as the shifted sincs are U { i ∈ I} orthogonal. This is equivalent to saying that the columns of U are orthogonal. Hence we have the following definition:
Definition 2.1.3 (Orthogonal Interpolating System (OIS)). An orthogonal interpolating system is an interpolating system (I, U) in which the basis vectors u_i are orthogonal. Furthermore, a sampling set I that belongs to an OIS is called an orthogonal sampling set, and a basis U that belongs to an OIS is called an orthogonal interpolating basis.
2.2 A Simple Example
We now give some examples of interpolating systems. Consider the vector space
    Y := { x ∈ R^5 : −2x_1 − 2x_4 + x_5 = 0 and 2x_1 − x_2 + 2x_3 + x_4 + x_5 = 0 }.

In other words, Y is the three-dimensional subspace of R^5 given by the intersection of two arbitrary hyperplanes. The first interpolating system we present is as follows:
• I : {1, 2, 3}

• U :

    u_1 = (1, 0, 0, −4/3, −2/3)^T,  u_2 = (0, 1, 0, 1/3, 2/3)^T,  u_3 = (0, 0, 1, −2/3, −4/3)^T
We take samples in slots 1, 2, and 3, and reconstruct with basis vectors u_1, u_2, and u_3. Notice the presence of an identity submatrix in U, in the first, second, and third slots. Assuming that u_1, u_2, and u_3 indeed constitute a basis for Y, it quickly follows that (I, U) must be an interpolating system. To see this, consider synthesizing an arbitrary vector f ∈ Y using this basis, so that f = α_1 u_1 + α_2 u_2 + α_3 u_3. Since u_2 and u_3 have no contribution to the first slot of f, it follows that α_1 must equal f[1]. Similarly, α_2 = f[2], and α_3 = f[3]. This synthesis is illustrated below:

    Y ∋ (1, 3, 3, −7/3, −8/3)^T = 1 · (1, 0, 0, −4/3, −2/3)^T + 3 · (0, 1, 0, 1/3, 2/3)^T + 3 · (0, 0, 1, −2/3, −4/3)^T
Now here is another example of an interpolating system, for the same space:
• I : {1, 3, 5}

• U :

    u_1 = (1, 1, 0, −1, 0)^T,  u_3 = (0, 2, 1, 0, 0)^T,  u_5 = (0, 3/2, 0, 1/2, 1)^T

This interpolating system is an improvement over the previous one in the sense that the sampling period has doubled, so a clock controlling the sampler can run at a slower rate. Again, notice the presence of an identity submatrix in U, namely in slots 1, 3, and 5. Thus we can also synthesize vectors in Y using samples from slots 1, 3, and 5 as coefficients:
    Y ∋ (1, 3, 3, −7/3, −8/3)^T = 1 · (1, 1, 0, −1, 0)^T + 3 · (0, 2, 1, 0, 0)^T − (8/3) · (0, 3/2, 0, 1/2, 1)^T

Reflecting on this example, some natural questions come to mind:
1. How does one find an interpolating system?
2. What is the relationship between and ? I U
3. How do we even know whether an interpolating system exists?
4. The bases shown here are not orthogonal. When does an orthogonal interpolating system exist?
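Before turning to these questions, the example above can be verified numerically. The following sketch (Python with NumPy; the arrays are transcribed from the example, with indices shifted to be 0-based) checks that the second system is indeed an interpolating system for Y:

```python
import numpy as np

# Basis from the example above: interpolating system with I = {1, 3, 5}
# (1-indexed in the text; 0-indexed here as {0, 2, 4}).
u1 = np.array([1, 1, 0, -1, 0], dtype=float)
u3 = np.array([0, 2, 1, 0, 0], dtype=float)
u5 = np.array([0, 1.5, 0, 0.5, 1], dtype=float)
U = np.column_stack([u1, u3, u5])
I = [0, 2, 4]

# Each basis vector satisfies the two hyperplane constraints defining Y.
A = np.array([[-2, 0, 0, -2, 1],
              [2, -1, 2, 1, 1]], dtype=float)
assert np.allclose(A @ U, 0)

# Interpolation condition: identity submatrix in the rows indexed by I.
assert np.allclose(U[I, :], np.eye(3))

# Reconstruct a vector of Y from its samples in slots I alone.
f = np.array([1, 3, 3, -7/3, -8/3])
assert np.allclose(A @ f, 0)        # f is indeed in Y
assert np.allclose(U @ f[I], f)     # f = sum_i f[i] u_i
```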
2.3 Interpolating Systems
In this section, we will prove the existence of interpolating systems in any vector space, prove their uniqueness up to the choice of sampling set, and establish various relations obeyed by interpolating systems. We start with a few preliminary lemmas. The following lemma states that the identity submatrix structure we have seen is not only sufficient but also necessary.
2.3.1 Basic Observations
Lemma 2.3.1 (Interpolation Condition). A basis U := {u_i : i ∈ I} for Y is an interpolating basis if and only if u_j[i] = δ_ij for i, j ∈ I. (This condition can equivalently be written in matrix form:

    E_I^T U = I_d,

where I_d is the d × d identity matrix.)
Proof. First, suppose U := {u_i : i ∈ I} is an interpolating basis. Then for j ∈ I,

    u_j = Σ_{i∈I} u_j[i] u_i.

Since the u_i's are a basis, they are linearly independent, so

    u_j[i] = δ_ij,  i, j ∈ I.
For the converse, suppose u_j[i] = δ_ij for i, j ∈ I. Since U is a basis, any vector f ∈ Y can be written as a linear combination of the u_i's:

    f = Σ_{i∈I} α_i u_i.

Thus, for any j ∈ I,

    f[j] = Σ_{i∈I} α_i u_i[j] = Σ_{i∈I} α_i δ_ij = α_j.

We thus have the interpolation equation

    f = Σ_{i∈I} α_i u_i = Σ_{i∈I} f[i] u_i,

and so U is an interpolating basis. ∎
Lemma 2.3.1 must be satisfied by any interpolation method, such as Lagrange interpolation, for instance. Shannon's interpolating basis also satisfies this property, as shown in Figure 2.1: if we evaluate the sinc function at the sampling locations, the result is a Kronecker delta. Now for some immediate consequences. Using Lemma 2.3.1, the next corollary easily follows:
Corollary 2.3.2. An interpolating system is determined by its sampling set. That is, if (I, U) and (I, V) are two interpolating systems for Y, where U = {u_i : i ∈ I} and V = {v_i : i ∈ I}, then u_i = v_i for all i ∈ I.
Proof. Suppose j ∈ I. Since (I, V) is an interpolating system, u_j may be synthesized as

    u_j = Σ_{i∈I} u_j[i] v_i.
Figure 2.1: Satisfaction of the interpolation condition in Shannon's interpolating system (the sinc function, sampled at the integer sampling locations, equals a Kronecker delta).
From Lemma 2.3.1, u_j[i] = δ_ji. Thus,

    u_j = Σ_{i∈I} δ_ji v_i = v_j. ∎
Thanks to Corollary 2.3.2, when we say that an interpolating system for Y is a pair (I, U), the U can be discarded, since U is uniquely determined by I. Thus, when convenient, we will simply say that I is an (orthogonal) interpolating system for Y. This abbreviated phrasing will be used more in Chapter 3.

Lemma 2.3.1 also allows us to explain some extreme cases. Suppose the signal subspace Y is simply a coordinate subspace of C^n. That is, for some index set J, Y = C^J := span{e_j : j ∈ J}. The following lemma asserts that the only interpolating system (I, U) is the obvious one, where I = J and U = {e_i : i ∈ I}.
Lemma 2.3.3. Any canonical basis vector e_k lying in Y is an element of any interpolating system (I, U).

Proof. If e_k ∈ Y, then

    e_k = Σ_{i∈I} e_k[i] u_i = Σ_{i∈I} δ_ki u_i.

Hence k ∈ I and e_k = u_k. ∎

The following lemma ensures that a certain matrix form is well-defined. It can be skipped on a first reading.
Lemma 2.3.4. Let R be an n × d matrix whose columns form a basis for Y ⊆ C^n. If I is an index set of cardinality d such that E_I^T R is invertible, then the matrix

    R (E_I^T R)^{-1}

is uniquely determined by I, independent of the choice of basis for Y.
Proof. Let U = R (E_I^T R)^{-1} and V = S (E_I^T S)^{-1}, where the columns of R and of S are two bases for Y. First observe that multiplying by E_I^T yields

    I_d = E_I^T U = E_I^T R (E_I^T R)^{-1},    I_d = E_I^T V = E_I^T S (E_I^T S)^{-1},

where I_d is the d × d identity matrix. Thus U and V both contain an identity matrix in the rows indexed by I. If we let I[p] denote the pth entry in the (ordered) index set I, then we can write

    u_j[I[p]] = v_j[I[p]] = δ_jp,  p ∈ [1 : d].

Now suppose that U ≠ V. Then there is some column index j ∈ [1 : d] such that if u_j is the jth column of U and v_j is the jth column of V, then u_j ≠ v_j. This in turn implies the existence of a row index k ∈ [1 : n] such that u_j[k] ≠ v_j[k]. Then u_j ∉ span{v_j}, since u_j[k] ≠ v_j[k] while 1 = u_j[I[j]] = v_j[I[j]]. We now argue that u_j ∉ span{v_1, v_2, ..., v_d} as well. To see this, suppose we could write

    u_j = Σ_{i=1}^{d} α_i v_i,

where for some p ∈ [1 : d] with p ≠ j it holds that α_p ≠ 0. Then

    e_{I[p]}^T ( Σ_{i=1}^{d} α_i v_i ) = Σ_{i=1}^{d} α_i v_i[I[p]] = Σ_{i=1}^{d} α_i δ_ip = α_p ≠ 0

by the identity submatrix property mentioned earlier. However,

    u_j[I[p]] = δ_jp = 0,

so no such representation exists; together with u_j ∉ span{v_j}, this shows that u_j does not lie in the span of the v_i's. It follows that the column vectors of U and V do not span the same space. However, these column vectors were both supposed to constitute a basis for Y, which is a contradiction. Thus, we must have U = V. ∎
2.3.2 Existence and Construction
We now give three different proofs for the existence of interpolating systems in any vector space.
Theorem 2.3.5 (Existence of Interpolating Systems). Let Y be a subspace of C^n. Then Y has an interpolating system.
Algebraic Proof. Let R be an n × d matrix whose columns form a basis for Y. Since R has rank d, there exists a d × d invertible submatrix E_I^T R, for some index set I. Define U = R (E_I^T R)^{-1}; by Lemma 2.3.4, this matrix is well-defined despite the freedom in the choice of R. Then

    E_I^T U = E_I^T (R (E_I^T R)^{-1}) = (E_I^T R)(E_I^T R)^{-1} = I_d.

Thus the matrix U spans Y and satisfies Lemma 2.3.1, so (I, U) is an interpolating system for Y. ∎
Geometric Proof. Again we will provide an explicit interpolating system, but this time in terms of geometric projections. First observe that C^n = Y ⊕ C^J for some index set J of n − d strictly increasing integers in [1 : n]. This can be argued inductively. If C^n = Y, then the claim is vacuously true. Otherwise, there exists a canonical basis vector e_{j_1} not contained in Y, so we can create the larger subspace Y ⊕ span{e_{j_1}}. This continues until we obtain

    C^n = Y ⊕ span{e_{j_1}, e_{j_2}, ..., e_{j_{n−d}}}.

Let P : Y ⊕ C^J → Y be the projection of C^n down to Y along C^J. Then for f ∈ Y,

    f = P f
      = P ( Σ_{i=1}^{n} f[i] e_i )
      = Σ_{i=1}^{n} f[i] P e_i
      = Σ_{i∉J} f[i] P e_i,

where the last line follows since P e_i = 0 for i ∈ J. Defining I := [1 : n] \ J and u_i := P e_i,

    f = Σ_{i∈I} f[i] u_i,

and we have an interpolating system. ∎
RCEF Proof. Let R be an n × d matrix whose columns are a basis for Y. By the Reduced Column Echelon Form theorem, there exists a sequence of elementary column operations such that the resulting matrix U is still of rank d and contains a d × d identity submatrix (the pivots). For example, we can add and subtract multiples of columns of an arbitrary basis matrix to produce a matrix whose pivot rows form a d × d identity submatrix, with arbitrary entries in the remaining rows:

    arbitrary basis  −→ (RCEF) −→  interpolating basis

Notice how we have reinterpreted the pivots as the locations in which to take samples. Since elementary column operations do not change the column space, the columns of U are still a basis for Y. Therefore, if I contains the row indices of the pivots, and U is the set of columns of U, then (I, U) is an interpolating system. ∎
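The RCEF construction is easy to carry out computationally: the reduced column echelon form of R is the transpose of the reduced row echelon form of R^T. A minimal sketch in Python with SymPy (the 5 × 3 matrix R below is a hypothetical stand-in, not the matrix from the text):

```python
import sympy as sp

# Reduced column echelon form: transpose of the reduced ROW echelon form
# of R^T. Elementary column operations preserve the column space, so the
# result is still a basis; the pivot rows carry the identity submatrix.
# R is a hypothetical 5x3 basis matrix for a 3-dimensional subspace of R^5.
R = sp.Matrix([[2, 1, 0],
               [0, 1, 1],
               [1, 0, 1],
               [3, 2, 1],
               [1, 1, 1]])

rcef_T, pivots = R.T.rref()
U = rcef_T.T                       # columns: an interpolating basis
I = list(pivots)                   # pivot row indices = sampling set

assert U[I, :] == sp.eye(3)        # interpolation condition (Lemma 2.3.1)
assert sp.Matrix.hstack(R, U).rank() == 3   # same column space as R
```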
The algebraic proof of Theorem 2.3.5 shows existence via an explicit construction of an interpolating system (I, U). The result from this proof is worth summarizing as follows:
Theorem 2.3.6 (IS Construction). Let R be an n × d matrix whose columns form a basis for Y. Then every interpolating system (I, U) is of the form

    U = R (E_I^T R)^{-1},

where I is chosen such that det(E_I^T R) ≠ 0.
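As a concrete check of this construction, the following NumPy sketch (0-indexed) rebuilds the second interpolating system of Section 2.2 from an arbitrary basis R of Y; the particular choice of nullspace basis is immaterial by Lemma 2.3.4:

```python
import numpy as np

# Theorem 2.3.6 applied to the example of Section 2.2 (0-indexed):
# Y is the nullspace of the two hyperplane constraints.
A = np.array([[-2, 0, 0, -2, 1],
              [2, -1, 2, 1, 1]], dtype=float)
_, _, Vt = np.linalg.svd(A)
R = Vt[2:].T                      # 5 x 3: orthonormal columns spanning Y

I = [0, 2, 4]                     # sampling set {1, 3, 5} in the text
EIR = R[I, :]                     # E_I^T R
assert abs(np.linalg.det(EIR)) > 1e-12
U = R @ np.linalg.inv(EIR)        # U = R (E_I^T R)^{-1}

# Matches the interpolating basis found by hand in Section 2.2.
U_expected = np.array([[1, 0, 0],
                       [1, 2, 1.5],
                       [0, 1, 0],
                       [-1, 0, 0.5],
                       [0, 0, 1]])
assert np.allclose(U, U_expected)
assert np.allclose(U[I, :], np.eye(3))
```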
To recap, Theorem 2.3.5 guarantees the existence of an interpolating basis. Lemma 2.3.1 says that the interpolating basis behaves like the canonical basis in the sampling locations I. Thus, an interpolating basis is a perturbation of the canonical basis. This is worded more precisely in the following corollary.
Corollary 2.3.7. For any subspace Y ⊆ C^n, there exists an index set I with |I| = dim(Y), such that

    Y = span{e_i + v_i : i ∈ I},

where the v_i lie in the coordinate subspace spanned by {e_i : i ∉ I}, some possibly 0. The e_i + v_i are an interpolating basis for Y, and any interpolating basis can be written in this form.
2.3.3 Relations Between Interpolating Systems
Lastly, we are now equipped to address relations between interpolating systems. Recall that the simple example from Section 2.2 showed two different interpolating systems for the same subspace. How are they related?
Theorem 2.3.8. Let (I, U) and (J, V) be two interpolating systems for the same subspace Y, with corresponding interpolating basis matrices U and V. Then

    V = U (E_J^T U)^{-1}.
Proof. Again we use the trick of writing one basis in terms of the other. Let U := {u_i : i ∈ I} and V := {v_j : j ∈ J}. Then since the u_i's are an interpolating basis, we can write, for each j ∈ J,

    v_j = Σ_{i∈I} v_j[i] u_i.

Written in matrix form, this equation translates to

    V = U (E_I^T V).

Multiplying on the left by E_J^T,

    E_J^T V = (E_J^T U)(E_I^T V).

By the matrix form of Lemma 2.3.1, E_J^T V = I_d. Therefore (E_J^T U) is invertible, and its inverse is equal to (E_I^T V). So

    V = U (E_I^T V) = U (E_J^T U)^{-1}. ∎
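The change-of-sampling-set formula can be verified on the two interpolating systems of Section 2.2 (a NumPy sketch, 0-indexed):

```python
import numpy as np

# The two interpolating systems for Y from Section 2.2 (0-indexed).
U = np.array([[1, 0, 0],           # sampling set I = {1, 2, 3}
              [0, 1, 0],
              [0, 0, 1],
              [-4/3, 1/3, -2/3],
              [-2/3, 2/3, -4/3]])
V = np.array([[1, 0, 0],           # sampling set J = {1, 3, 5}
              [1, 2, 1.5],
              [0, 1, 0],
              [-1, 0, 0.5],
              [0, 0, 1]])
J = [0, 2, 4]

# Theorem 2.3.8: V = U (E_J^T U)^{-1}.
assert np.allclose(V, U @ np.linalg.inv(U[J, :]))
```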
If two interpolating systems for the same subspace Y share the same interpolating basis vectors, but possibly use different sampling sets, then we can make a statement about the periodicity of signals in Y.
Theorem 2.3.9. Suppose (I, U) and (J, V) are interpolating systems for Y, where U = {u_i : i ∈ I} and V = {v_j : j ∈ J}, and U = V as sets. If u_i = v_j for some i ∈ I and j ∈ J, then f[i] = f[j] for all f ∈ Y.

Proof. If i = j, then the conclusion is trivially true. Define K := {k : u_k = v_k} and K′ := [1 : n] \ K. Then

    f = Σ_{k∈K} f[k] u_k + Σ_{i∈I∩K′} f[i] u_i

and

    f = Σ_{k∈K} f[k] v_k + Σ_{j∈J∩K′} f[j] v_j.

Subtracting,

    Σ_{k∈K} f[k] u_k − Σ_{k∈K} f[k] v_k = Σ_{j∈J∩K′} f[j] v_j − Σ_{i∈I∩K′} f[i] u_i.

By definition of K, the left-hand side is zero. On the right-hand side, since U = V, for every j ∈ J ∩ K′ there is a corresponding i ∈ I ∩ K′ with v_j = u_i. Combining the corresponding terms, we have

    0 = Σ_j (f[j] − f[i]) v_j.

Since the v_j are linearly independent, all the coefficients are zero, which is the conclusion of the theorem. ∎
2.3.4 Numerical Stability
Lastly, we address the problem of finding interpolating systems that are numerically stable. We thank Mark Tygert at the Courant Institute for bringing the following result to our attention, from his work on interpolative decompositions [2]. In a nutshell, it says that the entries of the interpolating basis can be guaranteed to have absolute value at most 1, as long as the submatrix of R with maximum absolute determinant is chosen. The sampling points which achieve this maximum are also called the Fekete points.
Theorem 2.3.10. For any finite-dimensional vector space Y, there exists an interpolating system (I, U) such that |u_i[m]| ≤ 1 for all m ∈ [1 : n] and i ∈ I.
Proof. Let R be an n × d matrix whose columns are a basis for Y. By Theorem 2.3.6, an interpolating system (I, U) satisfies the equation

    U = R (E_I^T R)^{-1},

where det(E_I^T R) ≠ 0. Undoing the matrix inverse,

    U (E_I^T R) = R.    (2.3)

Since E_I^T R is nonsingular, the entries of U can be expressed using Cramer's rule. Let the columns of U be denoted by u_1, u_2, ..., u_d. Let (E_I^T R)_{j→i} be the matrix E_I^T R whose jth row has been replaced with the ith row of R. Then, applying Cramer's rule to the ith row of both sides of Equation 2.3,

    u_j[i] = det((E_I^T R)_{j→i}) / det(E_I^T R).    (2.4)

If we define I_{j→i} := (I \ {I[j]}) ∪ {i}, where I[j] is the jth smallest element of I, then Equation 2.4 can be rewritten (up to a sign arising from reordering rows, which is immaterial below since we take absolute values) as

    u_j[i] = det(E_{I_{j→i}}^T R) / det(E_I^T R).    (2.5)

Let I* denote a sampling set which maximizes |det(E_I^T R)|; such a maximum exists since there are only a finite number of choices for I, and it is nonzero by Theorem 2.3.5. Under this choice of interpolating system,

    |u_j[i]| = |det(E_{I*_{j→i}}^T R)| / |det(E_{I*}^T R)|.

But I*_{j→i} is just another sampling set, so the numerator cannot exceed the maximal value in the denominator. Therefore,

    |u_j[i]| = |det(E_{I*_{j→i}}^T R)| / |det(E_{I*}^T R)| ≤ 1. ∎
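For small n, the maximum-determinant (Fekete) sampling set of Theorem 2.3.10 can be found by brute force. A NumPy sketch on a randomly generated subspace (hypothetical data, not from the text):

```python
import numpy as np
from itertools import combinations

# Brute-force the Fekete sampling set: among all d-subsets I of rows,
# pick the one maximizing |det(E_I^T R)|. The resulting interpolating
# basis then has all entries bounded by 1 in absolute value.
rng = np.random.default_rng(0)
n, d = 8, 3
R = rng.standard_normal((n, d))    # columns: basis for a random subspace

I_star = max(combinations(range(n), d),
             key=lambda I: abs(np.linalg.det(R[list(I), :])))
U = R @ np.linalg.inv(R[list(I_star), :])

assert np.max(np.abs(U)) <= 1 + 1e-12              # Theorem 2.3.10
assert np.allclose(U[list(I_star), :], np.eye(d))  # interpolation condition
```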
2.4 Orthogonal Interpolating Systems
While every subspace has an interpolating system, not every subspace has an orthogonal interpolating system (OIS). For example, recall the three-dimensional subspace of R^5 given in Section 2.2. From Corollary 2.3.2, the sampling sequence alone characterizes an interpolating system. By brute-force enumeration of all (5 choose 3) = 10 possible sampling sets of size 3, one discovers that some of these sampling sets are associated with interpolating systems, but none of them are associated with orthogonal interpolating systems.
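This brute-force enumeration is easy to reproduce. The following NumPy sketch (0-indexed) runs the search for the Section 2.2 subspace:

```python
import numpy as np
from itertools import combinations

# Enumerate all C(5,3) = 10 sampling sets for the subspace of Section 2.2:
# every I with E_I^T R invertible yields an interpolating system, but no
# sampling set yields one with orthogonal columns.
A = np.array([[-2, 0, 0, -2, 1],
              [2, -1, 2, 1, 1]], dtype=float)
R = np.linalg.svd(A)[2][2:].T      # basis for the nullspace of A

n_is, n_ois = 0, 0
for I in combinations(range(5), 3):
    EIR = R[list(I), :]
    if abs(np.linalg.det(EIR)) < 1e-10:
        continue
    U = R @ np.linalg.inv(EIR)     # interpolating basis for this I
    n_is += 1
    G = U.T @ U                    # Gram matrix; diagonal iff columns orthogonal
    if np.allclose(G - np.diag(np.diag(G)), 0):
        n_ois += 1

assert n_is > 0                    # interpolating systems exist...
assert n_ois == 0                  # ...but none of them is orthogonal
```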
2.4.1 Why Orthogonal?
There are many reasons why orthogonal interpolating systems are preferable to non-orthogonal ones. Consider the drawbacks of finding interpolating systems via Theorem 2.3.6, which says that if we can find I such that E_I^T R is nonsingular, then (I, U) is an interpolating system, where U = R (E_I^T R)^{-1}. Problems:

1. Matrix Inversion: We would prefer to avoid computation of the matrix inverse (E_I^T R)^{-1} if possible, due to the possibility that E_I^T R is ill-conditioned.

2. Search Complexity: Finding a nonsingular submatrix involves computing (n choose d) determinants in the worst case.

3. Sampling Noise: In a noiseless setting, interpolating systems allow us to reconstruct signals perfectly: Σ_{i∈I} f[i] u_i = f. However, suppose we incur additive Gaussian noise whenever a sample is taken. Then by linearity,

    Σ_{i∈I} (f[i] + N(0, σ²)) u_i = f + Σ_{i∈I} N(0, σ²) u_i,

where the N(0, σ²) terms on the left are the sampling noise and the sum on the right is the reconstruction error. This can also be written in matrix form:

    f̂ = U E_I^T (f + w) = f + U E_I^T w,    w ∼ N(0, σ² I_n).

By insisting on orthogonal interpolating systems, the problems enumerated above can be alleviated. No matrix inversion is required to compute the matrix U in an OIS. Search complexity is reduced because no determinants are required; and in the case of bandlimited spaces (see Chapter 3), more dramatic complexity reductions are possible. Lastly, by insisting that the columns of U are orthogonal, we can reduce the norm of the error term due to sampling noise. Also, if the sampling noise is white, using an orthogonal basis will preserve the whiteness of the noise.

The reader may wonder why we do not insist on orthonormal interpolating systems, in which each of the basis vectors u_i has norm one. The following result shows that orthonormality is apparently too restrictive to be interesting in finite-dimensional vector spaces, so we must settle for orthogonality.
Theorem 2.4.1 (Only Coordinate Subspaces Have Orthonormal Interpolating Systems). The following are equivalent:
(a) Y has a normal interpolating system; that is, all basis vectors have length 1, though they are not necessarily orthogonal.
(b) Y has an orthonormal interpolating system.
(c) Y = C^I for some index set I.

Proof. It is clear that (c) ⟹ (b) ⟹ (a). The implication (a) ⟹ (c) uses Lemma 2.3.1: Let i_0 ∈ I. The corresponding vector u_{i_0} in an interpolating basis for Y has u_{i_0}[i] = δ_{i,i_0} for i ∈ I. We must have u_{i_0}[k] = 0 for the remaining components k ∈ [1 : n] \ I, for otherwise ||u_{i_0}|| would be greater than 1, contrary to (a). Thus u_{i_0} = e_{i_0}. Since i_0 ∈ I was arbitrary, we conclude that Y = C^I. ∎
2.4.2 Properties of Projection Matrices
The principal object of concern in our study of orthogonal interpolating systems will be the projection operator K : C^n → Y, which takes an n-vector and computes its orthogonal projection onto the subspace Y. We first list some useful properties of K.
Definition 2.4.2. Let {w_1, w_2, ..., w_d} be an orthonormal basis for Y. The associated projection matrix is the n × n matrix K whose (m, n) entry is given by

    K[m, n] = Σ_{j=1}^{d} w_j[m] w_j[n]^*,    (2.6)

where ^* denotes complex conjugation. Equation 2.6 can also be written as a sum of tensor products:

    K = Σ_{j=1}^{d} w_j ⊗ w_j,

or as a sum of outer products,

    K = Σ_{j=1}^{d} w_j w_j^*,

regarding w_j as a column vector. Or, in matrix form, if W is the n × d matrix whose columns are the w_j, then

    K = W W^*.
The projection property K² = K is easily verified using the representation K = W W^*, as follows:

    K² = (W W^*)(W W^*)
       = W (W^* W) W^*
       = W I_d W^*    (W^* W = I_d since the columns of W are orthonormal)
       = W W^* = K.
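These properties are cheap to check numerically. A NumPy sketch on a randomly generated complex subspace (hypothetical data, not from the text):

```python
import numpy as np

# Projection matrix K = W W* for a subspace of C^n, with W an orthonormal
# basis obtained via QR of a random complex basis matrix.
rng = np.random.default_rng(1)
n, d = 6, 3
B = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
W, _ = np.linalg.qr(B)                # orthonormal columns spanning Y

K = W @ W.conj().T
assert np.allclose(K @ K, K)          # idempotent: K^2 = K
assert np.allclose(K, K.conj().T)     # Hermitian

# Reproducing property: <f, K[:, m]> = f[m] for f in Y, with the inner
# product conjugate-linear in its second argument.
f = W @ rng.standard_normal(d)        # an arbitrary vector of Y
m = 2
assert np.isclose(np.vdot(K[:, m], f), f[m])
```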
Lemma 2.4.3 (Reproducing Property). Let K : C^n → Y, and let f ∈ Y. Then

    ⟨f, K[·, n]⟩ = f[n].

Proof.

    ⟨f, K[·, n]⟩ = ⟨f, K e_n⟩ = ⟨K^* f, e_n⟩ = ⟨K f, e_n⟩ = ⟨f, e_n⟩ = f[n]. ∎
The following lemma states that the projection matrix K depends only on the subspace Y, and not on the choice of orthonormal basis for Y. That is, if R is any matrix whose columns are an orthonormal basis for Y, then K = R R^*.
Lemma 2.4.4 (Uniqueness of K). K is invariant over all choices of orthonormal bases for Y.

Proof. Let {v_1, ..., v_d} and {w_1, ..., w_d} be two different orthonormal bases for Y. Let K = Σ_{j=1}^{d} v_j v_j^* and let K′ = Σ_{j=1}^{d} w_j w_j^*. Then, using the reproducing property (Lemma 2.4.3) and the fact that the columns of K and K′ lie in Y,

    ||K[·, n] − K′[·, n]||² = ⟨K[·, n] − K′[·, n], K[·, n] − K′[·, n]⟩
        = ⟨K[·, n], K[·, n]⟩ − ⟨K[·, n], K′[·, n]⟩ − ⟨K′[·, n], K[·, n]⟩ + ⟨K′[·, n], K′[·, n]⟩
        = K[n, n] − K[n, n] − K′[n, n] + K′[n, n]
        = 0.

Since this holds for every column n, K = K′. ∎
2.4.3 Finding and Constructing an OIS
Theorem 2.4.5 (OIB Construction). Every orthogonal interpolating system (I, U) for Y is of the form

    u_i = K[·, i] / K[i, i],    i ∈ I.

(This is an alternative to the formula U = R (E_I^T R)^{-1}.)
Proof. Since {u_i : i ∈ I} is an orthogonal basis for Y, an orthonormal basis is given by {u_i/||u_i|| : i ∈ I}. We can then form the (unique) projection matrix K for Y as

    K = Σ_{i∈I} (u_i/||u_i||)(u_i/||u_i||)^*.

Suppose n ∈ I. Then the nth column of K is

    K[·, n] = Σ_{i∈I} (u_i u_i^* / ||u_i||²) e_n
            = Σ_{i∈I} u_i (u_i[n])^* / ||u_i||²
            = Σ_{i∈I} u_i δ_in / ||u_i||²
            = u_n / ||u_n||².

Note that

    K[n, n] = e_n^T K[·, n] = u_n[n] / ||u_n||² = δ_nn / ||u_n||² = 1 / ||u_n||².

It follows that

    K[·, n] / K[n, n] = u_n.

Since K is unique by Lemma 2.4.4, any two orthogonal interpolating bases with the same sampling index set must be the same. ∎
Since all orthogonal interpolating systems are constructed from columns of K, one could try searching through different subsets of d columns, and testing each subset for orthogonality. The following theorem exploits the reproducing properties of K to simplify this search process.
Theorem 2.4.6 (Identity Submatrix Test for Orthogonal Interpolating Systems). Let K : C^n → Y. Define K_I := E_I^T K E_I. Then Y has an orthogonal interpolating system with sampling set I if and only if K_I is a diagonal matrix.
Proof. We first argue necessity. By Theorem 2.4.5, the orthogonal interpolating system with sampling set I is {u_i = K[·, i]/K[i, i] : i ∈ I}. From Lemma 2.3.1, we must have

    δ_ij = u_i[j] = K[j, i] / K[i, i],    i, j ∈ I,

which we can rewrite as

    K[i, i] δ_ij = K[j, i],    i, j ∈ I.

Thus K_I is a diagonal matrix.

To show sufficiency, suppose that K_I is a diagonal matrix. Then for m, n ∈ I, by the reproducing property (Lemma 2.4.3), K[m, m] δ_mn = K[m, n] = ⟨K[·, m], K[·, n]⟩, and thus the vectors in {K[·, i] : i ∈ I} are orthogonal. The norms of these vectors are given by

    ||K[·, i]||² = ⟨K[·, i], K[·, i]⟩ = K[i, i],

where the last equality again holds by the reproducing property. Note that the K[·, i] vectors lie in Y, since

    K[·, i] = K e_i = Σ_{j=1}^{d} v_j v_j^* e_i = Σ_{j=1}^{d} (v_j[i])^* v_j,

and {v_j}_{j=1}^{d} are defined to be an orthonormal basis for Y. Thus, the scaled vectors

    K[·, i] / √(K[i, i]),    i ∈ I,

form an orthonormal basis of Y, and so for any f ∈ Y,

    f = Σ_{i∈I} ⟨f, K[·, i]/√(K[i, i])⟩ K[·, i]/√(K[i, i])
      = Σ_{i∈I} (1/K[i, i]) ⟨f, K[·, i]⟩ K[·, i]
      = Σ_{i∈I} (1/K[i, i]) f[i] K[·, i]
      = Σ_{i∈I} f[i] K[·, i]/K[i, i].

Thus the column vectors K[·, i]/K[i, i], i ∈ I, constitute an orthogonal interpolating basis for Y, completing the last part of the proof. ∎
Combining Theorems 2.4.5 and 2.4.6, we can summarize the situation as follows:
Theorem 2.4.7. Let K : C^n → Y. Then every orthogonal interpolating system (I, U) for Y is such that E_I^T K E_I is a diagonal matrix, and

    u_i = K[·, i] / K[i, i],    i ∈ I.
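The resulting search procedure can be sketched as follows (Python/NumPy, 0-indexed, on a small toy subspace of R^4 chosen so that orthogonal sampling sets exist; this is an illustrative example, not one from the text):

```python
import numpy as np
from itertools import combinations

# Identity submatrix test (Theorem 2.4.6): a sampling set I is orthogonal
# iff E_I^T K E_I is diagonal (with nonzero diagonal entries).
W = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]]) / np.sqrt(2)    # orthonormal basis of a toy subspace of R^4
K = W @ W.T                             # projection matrix onto Y

found = []
for I in combinations(range(4), 2):
    KI = K[np.ix_(I, I)]
    if np.allclose(KI, np.diag(np.diag(KI))) and np.all(np.diag(KI) > 1e-12):
        found.append(I)
        U = K[:, list(I)] / np.diag(KI)          # u_i = K[:, i] / K[i, i]
        assert np.allclose(U[list(I), :], np.eye(2))   # interpolation condition
        assert np.isclose(U[:, 0] @ U[:, 1], 0)        # orthogonal columns

assert (0, 2) in found       # e.g. slots {0, 2} give an OIS here
assert (0, 1) not in found   # slots {0, 1} do not
```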
Figure 2.2 illustrates the problem of searching for an orthogonal interpolating system. We aim to find a sampling set I, such as {2, 3, 5} in the figure, such that the corresponding submatrix of K is diagonal. Then the orthogonal interpolating basis vectors would be given by the 2nd, 3rd, and 5th columns of K, each scaled by a constant. No determinants or matrix inverses are required.

Figure 2.2: Finding diagonal submatrices in K (the 2nd, 3rd, and 5th rows of the columns u_2, u_3, u_5 drawn from K form a diagonal submatrix).

Some readers may be concerned that the proofs of Theorems 2.4.5 and 2.4.6 make use of projection matrices but do not explain where the projection matrices were motivated from. To address this concern, we include the following alternative proof of Theorem 2.4.6, using only first principles.
Theorem 2.4.8 (Reproof of Theorem 2.4.6). Let (I, U) be an interpolating system for a subspace Y. Then U has orthogonal columns if and only if E_I^T (R R^T) E_I is a diagonal matrix, where the columns of R are an orthonormal basis for Y.

Proof. Firstly, the theorem statement is well-defined: by Lemma 2.4.4, the projection matrix K := R R^T is invariant to the choice of orthonormal basis.

Now recall that (I, U) is an interpolating system if and only if U = R (E_I^T R)^{-1}, where the columns of R are a basis for Y. By Lemma 2.3.4, the matrix form R (E_I^T R)^{-1} is invariant to the choice of basis, so we lose no generality in henceforth assuming that the columns of R are orthonormal.

Let us assume the premise that the interpolating basis matrix U has orthogonal columns.
Then U^T U = D, where D is a d × d diagonal matrix, and

    D = U^T U
      = (E_I^T R)^{-T} R^T R (E_I^T R)^{-1}               (a)
      = (E_I^T R)^{-T} (R^T R)(R^T R) (E_I^T R)^{-1}      (b)
      = (E_I^T R)^{-T} R^T (R R^T) R (E_I^T R)^{-1}
      = U^T (R R^T) U                                     (c)
      = U^T (U E_I^T R)(U E_I^T R)^T U                    (d)
      = (U^T U) E_I^T (R R^T) E_I (U^T U)
      = D (E_I^T (R R^T) E_I) D,                          (e)

so that

    D^{-1} = E_I^T (R R^T) E_I,                           (f)

where the annotated steps are explained as follows:

(a) Substitute U = R (E_I^T R)^{-1}.

(b) Since R is assumed to have orthonormal columns, R^T R = I_d, the d × d identity matrix. Multiplying by I_d = R^T R provides an opening for introducing the projection matrix R R^T.

(c) Substitute U = R (E_I^T R)^{-1}.

(d) Substitute R = U (E_I^T R), which is implied by U = R (E_I^T R)^{-1}.

(e) Reinvoke the premise that D = U^T U.

(f) None of the diagonal entries of D can be zero, for otherwise a column of U would have zero norm, and U would not be an interpolating basis. Thus we can multiply by D^{-1} on both sides of the equation.

Since D is diagonal, D^{-1} is also diagonal, so E_I^T (R R^T) E_I is diagonal. Since all steps are reversible, this completes the proof. ∎
Lastly, while not every subspace has an orthogonal interpolating system, we have so far only shown this for a few examples by brute force. The following theorems provide a larger class of examples. More theorems of this sort will appear in Chapter 3.
Theorem 2.4.9. The subspaces of C^n having an orthogonal interpolating basis are of the form Y = span{e_i + v_i : i ∈ I}, where |I| = dim(Y) and the v_i are orthogonal vectors in C^{I′}, I′ := [1 : n] \ I, some possibly 0.

Proof. By Corollary 2.3.7, all subspaces and all interpolating bases are of the form described. Since the v_i are in C^{I′}, they are orthogonal to the e_i, or are zero. Clearly the e_i + v_i comprise an orthogonal basis if and only if the v_i are mutually orthogonal. ∎
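The construction in Theorem 2.4.9 is easy to instantiate. A small NumPy sketch (hypothetical toy subspace of R^4, not from the text): take I = {0, 1} and mutually orthogonal tails v_i supported on the complement {2, 3}.

```python
import numpy as np

# Building a subspace with an OIS per Theorem 2.4.9 (0-indexed).
v0 = np.array([0, 0, 1.0, 0])
v1 = np.array([0, 0, 0, 2.0])          # v0 and v1 orthogonal, supported off I
u0 = np.array([1.0, 0, 0, 0]) + v0     # e_0 + v_0
u1 = np.array([0, 1.0, 0, 0]) + v1     # e_1 + v_1
U = np.column_stack([u0, u1])

assert np.isclose(u0 @ u1, 0)                 # orthogonal interpolating basis
assert np.allclose(U[[0, 1], :], np.eye(2))   # interpolation condition

# Reconstruction of an arbitrary element of Y from samples in slots {0, 1}.
f = 3 * u0 - 2 * u1
assert np.allclose(U @ f[[0, 1]], f)
```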
The following theorem characterizes all (n − 1)-dimensional subspaces of C^n which have orthogonal interpolating systems. (The remaining (n − 1)-dimensional subspaces then have no OIS.)
Theorem 2.4.10. The (n − 1)-dimensional subspaces of C^n having an OIS are of the form Y = C^I ⊕ span{v}, where |I| = n − 2 and 0 ≠ v ∈ C^J, J = [1 : n] \ I.

Note that the coordinate hyperplanes C^I ⊕ span{e_j}, j ∉ I, are of this form, so these are included (in which case the interpolating basis is orthonormal).
Proof. First, suppose that Y is of this form. Since v ∈ C^J we have

    v[i] = 0

for i ∈ I. Thus all but two components of v are zero, and at least one of the two remaining components must be nonzero, say v[i_0]. Let u = (1/v[i_0]) v. Then {e_i : i ∈ I} ∪ {u} forms an orthogonal interpolating basis for Y = C^I ⊕ span{v}, indexed by I′ = I ∪ {i_0}.

Next, suppose that Y is an (n − 1)-dimensional subspace that has an orthogonal interpolating basis {u_i : i ∈ I′}, |I′| = n − 1. There is a single index j_0 ∈ [1 : n] \ I′ and, since u_k[j] = δ_jk for j, k ∈ I′, the orthogonality condition implies that

    u_k[j_0] (u_ℓ[j_0])^* = 0

for all distinct k, ℓ ∈ I′. One possibility is that all u_k[j_0] = 0. In this case the u_k are the natural basis vectors e_k for k ∈ I′, and Y = C^{I′}. Otherwise, there is exactly one k̃ ∈ I′ for which u_{k̃}[j_0] ≠ 0, and then u_ℓ = e_ℓ for ℓ ∈ I := I′ \ {k̃}. We thus have Y = C^I ⊕ span{u_{k̃}} with u_{k̃} ∈ C^J, J = {j_0, k̃}.

In fact, since u_{k̃}[k̃] = 1, we see that u_{k̃} = e_{k̃} + w where w ∈ C^{J_0}, J_0 = {j_0}. That is, Y is of the form

    Y = span({e_i : i ∈ I} ∪ {e_{k̃} + w}),

in accordance with Theorem 2.4.9. ∎
Example. Theorem 2.4.10 can be applied in the negative to demonstrate when a subspace cannot have an orthogonal interpolating basis. Let Y be the 2-dimensional subspace of R^3 spanned by the orthonormal vectors

    v_1 = (2/3, 1/3, 2/3)^T,    v_2 = (−2/3, 2/3, 1/3)^T.

The subspace is the plane {−x − 2y + 2z = 0}. No e_i lies in the plane, so there is no orthogonal interpolating basis. A nonorthogonal interpolating basis is

    u_1 = (1, 0, 1/2)^T,    u_2 = (0, 1, 1)^T.

2.5 Telegraphic Summary of Chapter 2
Framework of Discrete Sampling
• Y = vector space of signals to be sampled and reconstructed

• k = dim(Y)

• n = length of signals in Y

• Interpolating System: A pair (I, U) where I is a set of k integers and U = [u_1 | u_2 | ... | u_k] is an n × k matrix such that for all f ∈ Y,

    f = Σ_{i∈I} f[i] u_i.

  – I = sampling set
  – U = interpolating basis, written as U in matrix form

• Orthogonal Interpolating System: An interpolating system (I, U) in which U has orthogonal columns.
General Theory of Discrete Sampling
Notation
R := n × k matrix whose columns are a basis for Y

K := R R^T, where the columns of R are an orthonormal basis for Y; K is called the reproducing kernel for Y

E_I^T := 0-1 matrix that selects out rows with indices in I
Theorems
Interpolating System Test:
(I, U) is an interpolating system for Y if and only if:

1. E_I^T R is invertible, and

2. U = R (E_I^T R)^{-1}.

Orthogonal Interpolating System Test:

(I, U) is an orthogonal interpolating system for Y if and only if:

1. K[i, i′] = 0 for distinct i, i′ ∈ I, and

2. u_i = K[·, i]/K[i, i] for i ∈ I.

Existence of Interpolating Systems: Every vector space has an interpolating system. (However, not all spaces have an orthogonal interpolating system.)

Chapter 3
Discrete Sampling In Bandlimited Spaces
WE NOW EXAMINE BANDLIMITED SPACES, which are vector spaces defined with respect to their support in the Discrete Fourier Transform (DFT) domain. Some questions we aim to answer for this class of spaces include:
1. Given a bandlimited space, compute all orthogonal interpolating systems.
2. What symmetries and relations are satisfied by (orthogonal) interpolating systems?
3. Given a fixed sampling sequence I, which frequency supports can be reconstructed?
3.1 Definition of Bandlimited Spaces
First we establish our notation. Let ω = ω_n = e^{i2π/n} denote the nth root of unity exp(i 2π/n), where i = √−1. The DFT maps a vector of length n to another vector of length n via multiplication with the Fourier matrix F, which is defined by F[i, j] = ω^{ij} for i, j ∈ [0 : n−1]. Writing out the matrix equation X = F x:

    X[0]       ω^0  ω^0      ω^0        ...  ω^0             x[0]
    X[1]       ω^0  ω^1      ω^2        ...  ω^{n−1}         x[1]
    X[2]   =   ω^0  ω^2      ω^4        ...  ω^{2(n−1)}      x[2]
    X[3]       ω^0  ω^3      ω^6        ...  ω^{3(n−1)}      x[3]
     ...        ...                                           ...
    X[n−1]     ω^0  ω^{n−1}  ω^{2(n−1)} ...  ω^{(n−1)²}      x[n−1]

We say that X is the DFT of x. The entries of X can also be written out elementwise as summations:

    X[k] = Σ_m x[m] ω^{mk}.    (3.1)

The slots in the DFT domain are referred to as (discrete) frequencies. Lastly, the Inverse Discrete Fourier Transform (IDFT) of a sequence f̂ is given by
    f = F^{-1} f̂ = (1/n) F^* f̂,

since the Fourier matrix is symmetric and satisfies F F^* = n I_n, so that F^{-1} = (1/n) F^*.
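The convention above differs from common FFT libraries in the sign of the exponent; a short NumPy sketch pins it down (note that numpy.fft uses e^{−i2π/n}):

```python
import numpy as np

# The Fourier matrix of this chapter: F[i, j] = w^{ij} with w = e^{i 2pi/n}.
n = 8
w = np.exp(2j * np.pi / n)
F = w ** np.outer(np.arange(n), np.arange(n))

# F is symmetric and satisfies F F* = n I, so F^{-1} = (1/n) F*.
assert np.allclose(F, F.T)
assert np.allclose(F @ F.conj().T, n * np.eye(n))

x = np.arange(n, dtype=float)
X = F @ x                                   # DFT in this chapter's convention
assert np.allclose(x, F.conj().T @ X / n)   # IDFT recovers x
# Relation to numpy's convention (which uses the opposite exponent sign):
assert np.allclose(X, np.conj(np.fft.fft(np.conj(x))))
```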
A bandlimited space $B^{n:\mathcal{J}}$ is parameterized by two quantities: $n$, the length of vectors in the space, and $\mathcal{J}$, the discrete frequency support. For example, $B^{12:\{0,1,4,5,8,9\}}$ refers to the set of length-12 vectors whose DFTs only have energy in slots 0, 1, 4, 5, 8, and 9. An example signal from this space is shown in Figure 3.1. By taking the signal's DFT, which is shown in Figure 3.2, we can confirm that the frequency support is $\{0, 1, 4, 5, 8, 9\}$. A basis
[Figure: real and imaginary parts of the signal, plotted as amplitude against time.]
Figure 3.1: A bandlimited signal from $B^{12:\{0,1,4,5,8,9\}}$.
[Figure: real and imaginary parts of the DFT, plotted as amplitude per frequency slot.]
Figure 3.2: DFT of the signal from $B^{12:\{0,1,4,5,8,9\}}$.
for this space can be found by computing the IDFT of each of the six canonical vectors
$$e_0 = (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)^T$$
$$e_1 = (0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)^T$$
$$e_4 = (0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0)^T$$
$$e_5 = (0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0)^T$$
$$e_8 = (0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0)^T$$
$$e_9 = (0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0)^T$$
resulting in six basis vectors, each a length-12 sampled complex exponential. By using the notation
$$\boldsymbol{\omega} = (1, \omega, \omega^2, \ldots, \omega^{n-1}), \qquad \boldsymbol{\omega}^k = (1, \omega^k, \omega^{2k}, \ldots, \omega^{(n-1)k}),$$
we can then succinctly refer to this basis as
$$\{\boldsymbol{\omega}^j : j \in \mathcal{J}\},$$
where in this case $\mathcal{J}$ is $\{0, 1, 4, 5, 8, 9\}$.
3.2 Interpolating Systems
We begin by refining Theorem 2.3.6 for bandlimited spaces.
Theorem 3.2.1. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an interpolating system if and only if $E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}$ is nonsingular.

Proof. A basis for $B^{\mathcal{J}}$ is given by $R := \mathcal{F}^{*} E_{\mathcal{J}}$, the matrix whose columns are $\{\boldsymbol{\omega}^j : j \in \mathcal{J}\}$. By Theorem 2.3.6, in order for $\mathcal{I}$ to reconstruct the signal, we require $E_{\mathcal{I}}^T R$ to be nonsingular.
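Theorem 3.2.1 reduces reconstructibility to a determinant test on a small Vandermonde-like submatrix. The sketch below (our helper names; conjugating $\mathcal{F}$ does not change singularity, so we use $\omega^{ij}$ directly) reproduces the $\{0,1,3\}$ row of the $n = 6$ dictionary shown later in Table 3.1:

```python
import cmath

def det(M):
    # Cofactor expansion along the first row; fine for the tiny matrices used here.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def is_interpolating(n, I, J):
    # Theorem 3.2.1: I interpolates B^{n:J} iff the submatrix (w^{ij}) is nonsingular.
    w = cmath.exp(2j * cmath.pi / n)
    return abs(det([[w ** (i * j) for j in J] for i in I])) > 1e-9

# The {0,1,3} row of Table 3.1 (n = 6):
assert is_interpolating(6, [0, 1, 2], [0, 1, 3])
assert is_interpolating(6, [0, 1, 3], [0, 1, 3])
assert not is_interpolating(6, [0, 2, 4], [0, 1, 3])
```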
The fact that the Fourier matrix is symmetric causes interpolating systems in bandlimited spaces to exhibit duality and symmetry.
3.2.1 Interchange Duality
Firstly, to explain what is meant by duality, consider the following two problems:
Sampling Problem: Given $B^{\mathcal{J}}$, find $\mathcal{I}$. (Given the frequencies, where should we sample?)

Inverse Problem: Given $\mathcal{I}$, find $B^{\mathcal{J}}$. (Which frequencies can be reconstructed with a given sampling set?)
In the sampling problem, the signal space is given in advance, and we wish to know where to sample. Such a problem occurs in spectroscopy. In the inverse problem, on the other hand, we are stuck with a certain sampling sequence $\mathcal{I}$, and we wish to know which bandlimited spaces it can reconstruct. This situation arises in analog circuits, where irregular sampling patterns can be difficult to implement. The following interchange theorem says that the sampling and inverse problems are duals – a solution to one is a solution to the other. Thus all results about interpolating
systems in bandlimited spaces have two interpretations, and we need only prove one of the two directions.
Theorem 3.2.2. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an interpolating system if and only if $B^{\mathcal{I}}$ has $\mathcal{J}$ as an interpolating system. (The $\mathcal{I}$ and $\mathcal{J}$ sets can be interchanged.)
Proof. Since determinants are invariant under transpose, and $\mathcal{F}$ is symmetric,
$$\det(E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}) = \det\bigl((E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}})^T\bigr) = \det(E_{\mathcal{J}}^T (\mathcal{F}^{*})^T E_{\mathcal{I}}) = \det(E_{\mathcal{J}}^T \mathcal{F}^{*} E_{\mathcal{I}}).$$
Thus,
$$\det(E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}) \neq 0 \iff \det(E_{\mathcal{J}}^T \mathcal{F}^{*} E_{\mathcal{I}}) \neq 0.$$
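The interchange theorem can be spot-checked numerically; this sketch (our code and helper names) tests one nonsingular and one singular pair from the $n = 6$ dictionary:

```python
import cmath

def det(M):
    # Cofactor expansion along the first row; adequate for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def submatrix(n, I, J):
    w = cmath.exp(2j * cmath.pi / n)
    return [[w ** (i * j) for j in J] for i in I]

# Interchange duality: (I, J) and (J, I) are simultaneously singular or nonsingular.
for I, J in [([0, 1, 3], [0, 1, 2]), ([0, 2, 4], [0, 1, 3])]:
    d_IJ = abs(det(submatrix(6, I, J)))
    d_JI = abs(det(submatrix(6, J, I)))
    assert (d_IJ > 1e-9) == (d_JI > 1e-9)
```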
3.2.2 Dihedral Symmetry
Aside from duality, interpolating systems for bandlimited spaces also exhibit dihedral symmetry. To explain, we first introduce some notation. If $\mathcal{J}$ is a subset of $[0 : n-1]$, where $n$ is the signal length, and $\tau \in \mathbb{N}$, then define the "rotated set" $\mathcal{J}_\tau$ by
$$\mathcal{J}_\tau := \{(j + \tau) \bmod n : j \in \mathcal{J}\}.$$
We can also define the "reflected set" $\mathcal{J}^{<}$ as
$$\mathcal{J}^{<} := \{(-j) \bmod n : j \in \mathcal{J}\}.$$
These actions capture the symmetries of a regular polygon:
Theorem 3.2.3 says that if $\mathcal{I}$ is an interpolating system for the bandlimited space $B^{\mathcal{J}}$, then we can rotate or reflect $\mathcal{I}$ and still have an interpolating system for the same space. And by invoking duality, if we rotate or reflect the frequency set $\mathcal{J}$, then $\mathcal{I}$ will still be an interpolating system for the new bandlimited space.
Theorem 3.2.3. Let $\mathcal{I}$ and $\mathcal{J}$ be index sets in $[0 : n-1]$. Then

1. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an IB if and only if $B^{\mathcal{J}}$ has $\mathcal{I}_\tau$ as an IB.

2. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an IB if and only if $B^{\mathcal{J}}$ has $\mathcal{I}^{<}$ as an IB.

3. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an IB if and only if $B^{\mathcal{J}_\tau}$ has $\mathcal{I}$ as an IB.

4. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an IB if and only if $B^{\mathcal{J}^{<}}$ has $\mathcal{I}$ as an IB.

Proof. By Theorem 3.2.2, statements (1) and (2) are equivalent to statements (3) and (4), so it suffices to prove only statements (3) and (4). By Theorem 3.2.1, statement (3) is equivalent to
$$0 \neq \det(E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}) \iff 0 \neq \det(E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}_\tau}).$$
To argue this, define $S := E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}$ and $\tilde{S} := E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}_\tau}$. Then $S[a, b] = \omega^{\mathcal{I}[a]\mathcal{J}[b]}$, and
$$\tilde{S}[a, b] = \omega^{\mathcal{I}[a]\mathcal{J}_\tau[b]} = \left(e^{i\frac{2\pi}{n}}\right)^{\mathcal{I}[a]\mathcal{J}_\tau[b]}.$$
We now argue that we may replace the $\mathcal{J}_\tau[b]$ in the exponent with $\mathcal{J}[b] + \tau$. To see this, note that by the periodicity of the complex exponential, it suffices to argue that $\mathcal{I}[a]\mathcal{J}_\tau[b]$ and $\mathcal{I}[a](\mathcal{J}[b] + \tau)$ are congruent modulo $n$, where $\mathcal{I}[k]$ is the $k$th smallest element of $\mathcal{I}$. This is shown via the following set of linear congruences, all taken modulo $n$:
$$\mathcal{I}[a]\mathcal{J}_\tau[b] \equiv \mathcal{I}[a](\mathcal{J}_\tau[b] \bmod n) \equiv \mathcal{I}[a]\bigl(((\mathcal{J}[b] + \tau) \bmod n) \bmod n\bigr) \equiv \mathcal{I}[a]\bigl((\mathcal{J}[b] + \tau) \bmod n\bigr) \equiv \mathcal{I}[a](\mathcal{J}[b] + \tau).$$
In the first and last congruences, we used the fact that terms being multiplied or added in a linear congruence can always be reduced modulo $n$ first. In the second congruence, we substituted the definition of $\mathcal{J}_\tau$, and in the third, we used the identity $(x \bmod n) \bmod n = x \bmod n$. Hence,
$$\tilde{S}[a, b] = \omega^{\mathcal{I}[a]\mathcal{J}_\tau[b]} = \omega^{\mathcal{I}[a](\mathcal{J}[b] + \tau)} = \omega^{\tau\mathcal{I}[a]}\,\omega^{\mathcal{I}[a]\mathcal{J}[b]} = \omega^{\tau\mathcal{I}[a]}\,S[a, b].$$
We can thus construct $\tilde{S}$ from $S$ by scaling the $a$th row of $S$ by $\omega^{\tau\mathcal{I}[a]}$. Thus,
$$\tilde{S} = \operatorname{diag}\left(\omega^{\tau\mathcal{I}[1]}, \omega^{\tau\mathcal{I}[2]}, \ldots, \omega^{\tau\mathcal{I}[d]}\right) S,$$
and
$$\det \tilde{S} = \omega^{\tau\sum_{k=1}^{d}\mathcal{I}[k]} \cdot \det S.$$
Any power of $\omega$ is nonzero. Thus, $\det(\tilde{S})$ is zero if and only if $\det(S)$ is zero, which verifies statement (3).
Moving on, statement (4) is equivalent to
$$0 \neq \det(E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}) \iff 0 \neq \det(E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}^{<}}).$$
The proof of this statement is very similar. Again, let $S := E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}}$, and define $\tilde{S} := E_{\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}^{<}}$. Thus $S[a, b] = \omega^{\mathcal{I}[a]\mathcal{J}[b]}$, and
$$\tilde{S}[a, b] = \omega^{\mathcal{I}[a]\mathcal{J}^{<}[b]} = \left(e^{i\frac{2\pi}{n}}\right)^{\mathcal{I}[a]\mathcal{J}^{<}[b]}.$$
We now argue that we may replace the $\mathcal{J}^{<}[b]$ in the exponent with $-\mathcal{J}[b]$. By the periodicity of the complex exponential, it suffices to argue that $\mathcal{I}[a]\mathcal{J}^{<}[b]$ and $-\mathcal{I}[a]\mathcal{J}[b]$ are congruent modulo $n$:
$$\mathcal{I}[a]\mathcal{J}^{<}[b] \equiv \mathcal{I}[a]\bigl((-\mathcal{J}[b]) \bmod n\bigr) \equiv \mathcal{I}[a]\bigl((-1 \bmod n)(\mathcal{J}[b] \bmod n) \bmod n\bigr) \equiv \mathcal{I}[a](-\mathcal{J}[b]) \equiv -\mathcal{I}[a]\mathcal{J}[b].$$
Hence,
$$\tilde{S}[a, b] = \omega^{\mathcal{I}[a]\mathcal{J}^{<}[b]} = \omega^{-\mathcal{I}[a]\mathcal{J}[b]} = \overline{\omega^{\mathcal{I}[a]\mathcal{J}[b]}} = \overline{S[a, b]}.$$
Thus,
$$\det(\tilde{S}) = \det(\overline{S}) = \overline{\det S},$$
and $\det(\tilde{S})$ is zero if and only if $\det(S)$ is zero, verifying statement (4).
3.2.3 Building a Sampling Dictionary
To demonstrate Theorems 3.2.2 and 3.2.3, consider the problem of building a sampling dictionary for all possible bandlimited subspaces of $\mathbb{C}^n$. This dictionary is a table with three columns:
1. frequencies (the bandlimited space $B^{\mathcal{J}}$),

2. interpolating systems (sampling sets that can reconstruct signals from $B^{\mathcal{J}}$), and

3. non-interpolating systems (sampling sets that cannot reconstruct signals from $B^{\mathcal{J}}$).
Without exploiting symmetry, the sampling dictionary for $\mathbb{C}^n$ would have $2^n - 1$ rows – one for every possible bandlimited subspace, not including the empty subspace. If the dimensionality of a bandlimited subspace is $d$, then the corresponding row of the dictionary would list $\binom{n}{d}$ different sampling sets, including both the interpolating and non-interpolating ones. Thus the number of sets listed in this dictionary would be
$$\sum_{d=1}^{n}\binom{n}{d}^2 = \binom{2n}{n} - 1,$$
where the equality follows from Vandermonde's convolution formula.

On the other hand, by invoking dihedral symmetry and duality, we can compress the sampling dictionary. If $\mathcal{J}$ and $\mathcal{J}'$ are equivalent modulo the dihedral group, then the rows for $B^{\mathcal{J}}$ and $B^{\mathcal{J}'}$ will yield identical results, so we need only consider one of these spaces. Table 3.1 shows such a compressed dictionary, for $n = 6$. Note that not all possible frequency sets are listed in the frequencies column. We need only list one representative from each equivalence class modulo the dihedral group. For example, the set $\{0, 1, 3\}$, highlighted in Table 3.1, encodes all of the sets shown in Table 3.2.

A natural combinatorial structure describing these dihedral symmetries is a bracelet with $n$ beads, each of which may be painted black or white. The color black indicates that the corresponding frequency is present, and white indicates the frequency's absence. We can encode a bracelet numerically by listing only the locations of the black beads. In this manner, we can reduce the number of rows from $2^6 - 1 = 63$ down to 12, the number of distinct nonempty black and white bracelets with 6 beads. In general, the number of rows
frequencies        interpolating systems                non-interpolating systems
{0}                {0}                                  {}
{0,1}              {0,1}, {0,2}, {0,3}                  {}
{0,2}              {0,1}, {0,2}                         {0,3}
{0,3}              {0,1}, {0,3}                         {0,2}
{0,1,2}            {0,1,2}, {0,1,3}, {0,2,4}            {}
{0,1,3}            {0,1,2}, {0,1,3}                     {0,2,4}
{0,2,4}            {0,1,2}, {0,2,4}                     {0,1,3}
{0,1,2,3}          {0,1,2,3}, {0,1,2,4}, {0,1,3,4}      {}
{0,1,2,4}          {0,1,2,3}, {0,1,2,4}                 {0,1,3,4}
{0,1,3,4}          {0,1,2,3}, {0,1,3,4}                 {0,1,2,4}
{0,1,2,3,4}        {0,1,2,3,4}                          {}
{0,1,2,3,4,5}      {0,1,2,3,4,5}                        {}

Table 3.1: Sampling dictionary for bandlimited signals of length n = 6.
{0,1,3}    {0,1,3}, {1,2,4}, {2,3,5}, {3,4,0}, {4,5,1}, {5,0,2},
           {0,2,3}, {1,3,4}, {2,4,5}, {3,5,0}, {4,0,1}, {5,1,2}

Table 3.2: Equivalence class represented by the six-bead black and white bracelet {0, 1, 3}.
in the dictionary is given by the following theorem:
Theorem 3.2.4 (Bracelet Count). When n is odd, the number of distinct length-n black and white bracelets is
$$b(n) := \frac{1}{2n}\sum_{d \mid n}\varphi(d)\,2^{n/d} + 2^{\frac{n-1}{2}}.$$
When n is an odd prime, this simplifies to
$$b(n) = \frac{1}{n}\left(2^{n-1} + n - 1\right) + 2^{\frac{n-1}{2}}.$$
When n is even,
$$b(n) := \frac{1}{2n}\sum_{d \mid n}\varphi(d)\,2^{n/d} + \frac{3}{4}\cdot 2^{n/2}.$$

Proof. Basic group theory and Pólya's Enumeration Theorem. See Appendix C.
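The counting formulas are easy to cross-check against a direct orbit count. The sketch below (our code; the formulas are rearranged with fractions cleared so all arithmetic stays in integers) agrees with brute-force enumeration for small n:

```python
from math import gcd

def phi(m):
    # Euler's totient, computed naively; fine for small m.
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def b_formula(n):
    # Theorem 3.2.4, odd and even cases, with denominators cleared.
    s = sum(phi(d) * 2 ** (n // d) for d in range(1, n + 1) if n % d == 0)
    if n % 2 == 1:
        return (s + 2 * n * 2 ** ((n - 1) // 2)) // (2 * n)
    return (2 * s + 3 * n * 2 ** (n // 2)) // (4 * n)

def b_brute(n):
    # Count orbits of {0,1}^n under all rotations and reflections directly.
    seen = set()
    for mask in range(2 ** n):
        beads = tuple((mask >> i) & 1 for i in range(n))
        orbit = [beads[r:] + beads[:r] for r in range(n)]
        orbit += [t[::-1] for t in orbit]
        seen.add(min(orbit))
    return len(seen)

assert [b_formula(n) for n in range(1, 9)] == [b_brute(n) for n in range(1, 9)]
assert b_formula(6) == 13  # 12 nonempty bracelets plus the all-white one
```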
Although $b(n)$ still grows asymptotically like $2^n$, bracelets are very helpful in displaying dictionaries for small values of $n$ in theoretical research. The enumeration of bracelets is an interesting research area in combinatorial group theory. Sawada's algorithm [3] can be used to generate these bracelets in constant amortized time.
Going further, we can compress the entries in each row. If $\mathcal{I}$ is a (non-)interpolating system for $B^{\mathcal{J}}$, then any rotation or reflection of $\mathcal{I}$ is also a (non-)interpolating system for that space, so these relatives of $\mathcal{I}$ need not be listed. If a bandlimited space has $d$ frequencies, then its corresponding row in the compressed dictionary will list $b(n, d)$ sampling sets, where $b(n, d)$ is the number of black and white bracelets with exactly $d$ black beads. The following theorem gives explicit formulas for $b(n, d)$:
Theorem 3.2.5. When n is odd, the number of distinct length-n black and white bracelets with exactly w black beads is
$$b(n, w) = \begin{cases} \dfrac{1}{2}\dbinom{\frac{n-1}{2}}{\frac{w}{2}} + \dfrac{1}{2n}\displaystyle\sum_{d \mid n \,\wedge\, d \mid w}\varphi(d)\binom{n/d}{w/d} & \text{for even } w, \\[10pt] \dfrac{1}{2}\dbinom{\frac{n-1}{2}}{\frac{w-1}{2}} + \dfrac{1}{2n}\displaystyle\sum_{d \mid n \,\wedge\, d \mid w}\varphi(d)\binom{n/d}{w/d} & \text{for odd } w. \end{cases}$$
When n is even,
$$b(n, w) = \begin{cases} \dfrac{1}{2}\dbinom{n/2}{w/2} + \dfrac{1}{2n}\displaystyle\sum_{d \mid n \,\wedge\, d \mid w}\varphi(d)\binom{n/d}{w/d} & \text{for even } w, \\[10pt] \dfrac{1}{2}\dbinom{n/2 - 1}{\frac{w-1}{2}} + \dfrac{1}{2n}\displaystyle\sum_{d \mid n \,\wedge\, d \mid w}\varphi(d)\binom{n/d}{w/d} & \text{for odd } w. \end{cases}$$
(Note: The notation w is chosen here to suggest "weight".)

Proof. See Appendix C.
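As with Theorem 3.2.4, these fixed-weight formulas can be verified against brute force. The sketch below (our code; both parity cases are folded together with floor division, and fractions are cleared so arithmetic stays in integers):

```python
from math import comb, gcd

def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def b_fixed(n, w):
    # Theorem 3.2.5: cyclic term over divisors of gcd(n, w), plus reflection term.
    s = sum(phi(d) * comb(n // d, w // d)
            for d in range(1, n + 1) if n % d == 0 and w % d == 0)
    if n % 2 == 1:
        refl = comb((n - 1) // 2, w // 2)       # w//2 = (w-1)/2 for odd w
    elif w % 2 == 0:
        refl = comb(n // 2, w // 2)
    else:
        refl = comb(n // 2 - 1, (w - 1) // 2)
    return (s + n * refl) // (2 * n)

def b_fixed_brute(n, w):
    seen = set()
    for mask in range(2 ** n):
        beads = tuple((mask >> i) & 1 for i in range(n))
        if sum(beads) != w:
            continue
        orbit = [beads[r:] + beads[:r] for r in range(n)]
        orbit += [t[::-1] for t in orbit]
        seen.add(min(orbit))
    return len(seen)

for n in range(1, 9):
    for w in range(n + 1):
        assert b_fixed(n, w) == b_fixed_brute(n, w)
```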
The total number of sets listed in the compressed dictionary is then $\sum_{w=1}^{n} b(n, w)^2$. These formulas are useful for calculating in advance the memory and disk space required to compute and store a sampling dictionary.

The author is unaware of any efficient algorithms for generating bracelets of fixed weight. However, Ruskey and Sawada have an efficient algorithm for generating necklaces of fixed weight [4], where a necklace is a string of black and white beads that can be rotated but not flipped over (replace the dihedral group with the cyclic group). This is the reason why all sampling dictionaries shown henceforth will employ only necklace compression, rather than bracelet compression. Table 3.3 shows another compressed sampling dictionary, for n = 8.

Lastly, a demonstration of duality. The dictionaries shown thus far assume that one knows which frequencies are present, but does not know where to sample. We can also construct a dictionary for the reverse problem, in which we have already fixed our sampling set, but want to know which bandlimited spaces can be reconstructed with it. Theorem 3.2.2 says that this reverse dictionary is identical to the original dictionary – the only difference is the headers atop each column, as shown in Table 3.4.
[Table 3.3: compressed sampling dictionary for n = 8, with one row per necklace-representative frequency set listing its interpolating and non-interpolating sampling sets.]
Table 3.3: Sampling dictionary for bandlimited signals of length n = 8.
interpolating systems  reconstructible frequencies          non-reconstructible frequencies
{0}                    {0}                                  {}
{0,1}                  {0,1}, {0,2}, {0,3}                  {}
{0,2}                  {0,1}, {0,2}                         {0,3}
{0,3}                  {0,1}, {0,3}                         {0,2}
{0,1,2}                {0,1,2}, {0,1,3}, {0,2,4}            {}
{0,1,3}                {0,1,2}, {0,1,3}                     {0,2,4}
{0,2,4}                {0,1,2}, {0,2,4}                     {0,1,3}
{0,1,2,3}              {0,1,2,3}, {0,1,2,4}, {0,1,3,4}      {}
{0,1,2,4}              {0,1,2,3}, {0,1,2,4}                 {0,1,3,4}
{0,1,3,4}              {0,1,2,3}, {0,1,3,4}                 {0,1,2,4}
{0,1,2,3,4}            {0,1,2,3,4}                          {}
{0,1,2,3,4,5}          {0,1,2,3,4,5}                        {}

Table 3.4: Inverse dictionary for bandlimited signals of length n = 6.
sampling dictionary for n = 16
frequencies     interpolating systems        non-IS
...             ...                          ...
{0,2,4,6}       ..., {0,4,6,10}, ...         ...
...             ...                          ...

⇓ ⇑

sampling dictionary for n = 8
frequencies     interpolating systems        non-IS
...             ...                          ...
{0,2,4,6}       ..., {0,2,3,5}, ...          ...
...             ...                          ...

Figure 3.3: Lifting and dropping of interpolating systems between dictionaries.
3.2.4 Lifting and Dropping
Rows of a sampling dictionary for bandlimited signals of length $n_1$ can be exported into the sampling dictionary for signals of length $n_1 \cdot n_2$, as long as the proper upsampling is performed. For example, in Figure 3.3, we see that $\mathcal{I} = \{0, 2, 3, 5\}$ is an interpolating system for $B^{8:\{0,2,4,6\}}$. By upsampling $\mathcal{I}$ by a factor of 2, we can assert that $\{0, 4, 6, 10\}$ is an interpolating system for $B^{16:\{0,2,4,6\}}$. The following theorem makes this idea precise.
Theorem 3.2.6. Let $n_1, n_2 \in \mathbb{N}$. Then $\mathcal{I}$ is an IB for $B^{\mathcal{J}} \subseteq \mathbb{C}^{n_1}$ if and only if $\mathcal{I}'$ is an IB for $B^{\mathcal{J}'} \subset \mathbb{C}^{n_1 n_2}$, under any of the following choices for $\mathcal{I}'$ and $\mathcal{J}'$:

1. (Upsample in time) $\mathcal{I}' = n_2\mathcal{I} = \{n_2 \cdot i \mid i \in \mathcal{I}\}$, and $\mathcal{J}' = \mathcal{J}$.

2. (Upsample in frequency) $\mathcal{I}' = \mathcal{I}$, and $\mathcal{J}' = n_2\mathcal{J}$.

3. (Upsample in time and frequency) Suppose $n_2 = ab$, with $a \mid \mathcal{I}[i]$ for all $i$, and $b \mid \mathcal{J}[j]$ for all $j$. Then $\mathcal{I}' = a\mathcal{I}$ and $\mathcal{J}' = b\mathcal{J}$.
Proof. Let $S$ be a $d \times d$ submatrix of $\mathcal{F}_{n_1}^{*}$, with rows drawn from $\mathcal{I}$ and columns from $\mathcal{J}$:
$$S[i, j] = \mathcal{F}_{n_1}^{*}[\mathcal{I}[i], \mathcal{J}[j]], \qquad i, j \in [d].$$
The aim is to find index sets $\mathcal{I}'$ and $\mathcal{J}'$ that define a $d \times d$ submatrix $S'$ of $\mathcal{F}_{n_1 n_2}^{*}$,
$$S'[i, j] = \mathcal{F}_{n_1 n_2}^{*}[\mathcal{I}'[i], \mathcal{J}'[j]],$$
such that the entries of $S'$ are identical to those of $S$. That is,
$$\forall\, i, j \in [d] : \quad \left(e^{i\frac{2\pi}{n_1}}\right)^{\mathcal{I}[i]\mathcal{J}[j]} = \left(e^{i\frac{2\pi}{n_1 n_2}}\right)^{\mathcal{I}'[i]\mathcal{J}'[j]}.$$
This implies that
$$\forall\, i, j \in [d] : \quad n_2\,\mathcal{I}[i]\mathcal{J}[j] = \mathcal{I}'[i]\mathcal{J}'[j].$$
We can have $\mathcal{I}' = n_2\mathcal{I}$, or $\mathcal{J}' = n_2\mathcal{J}$, or we can split the factors of $n_2$ between $\mathcal{I}'$ and $\mathcal{J}'$, assuming that the divisibility conditions stipulated in the theorem hold.
3.3 Orthogonal Interpolating Systems
3.3.1 The Circulant Projection Matrix
Many properties of orthogonal interpolating systems in bandlimited spaces depend on the following fact.
Theorem 3.3.1. Y is a bandlimited space if and only if the projection matrix $K : \mathbb{C}^n \to Y$ is circulant.
Proof. For brevity of presentation, let us rescale the Fourier matrix such that $\mathcal{F}$ denotes the matrix
$$\mathcal{F}_{ij} = \frac{1}{\sqrt{n}}\, e^{-i\frac{2\pi}{n}ij}.$$
This convention has the advantage that $\mathcal{F}\mathcal{F}^{*} = I$.

Firstly, suppose Y is a bandlimited space $B^{\mathcal{J}}$, for some $\mathcal{J}$. This space is equipped with the orthonormal basis $\{\frac{1}{\sqrt{n}}\boldsymbol{\omega}^j : j \in \mathcal{J}\}$, so we can express the projection matrix as
$$K = \frac{1}{n}\sum_{k \in \mathcal{J}} \boldsymbol{\omega}^k (\boldsymbol{\omega}^k)^{*}.$$
Expanding this,
$$K[i, j] = e_i^T K e_j = \frac{1}{n}\sum_{k \in \mathcal{J}} (e_i^T \boldsymbol{\omega}^k)\bigl((\boldsymbol{\omega}^k)^{*} e_j\bigr) = \frac{1}{n}\sum_{k \in \mathcal{J}} \omega^{ki}\,\overline{\omega^{kj}} = \frac{1}{n}\sum_{k \in \mathcal{J}} \omega^{k(i-j)}.$$
Thus K is circulant.
For the converse, suppose that K is circulant. Then it is diagonalized by the Fourier matrix, and there exists a diagonal matrix $\Lambda$ such that
$$K = \mathcal{F}^{*}\Lambda\mathcal{F}.$$
Since K is also a projection matrix, we also require $K^2 = K$:
$$K^2 = K$$
$$(\mathcal{F}^{*}\Lambda\mathcal{F})(\mathcal{F}^{*}\Lambda\mathcal{F}) = \mathcal{F}^{*}\Lambda\mathcal{F}$$
$$\mathcal{F}^{*}\Lambda(\mathcal{F}\mathcal{F}^{*})\Lambda\mathcal{F} = \mathcal{F}^{*}\Lambda\mathcal{F}$$
$$\mathcal{F}^{*}\Lambda^2\mathcal{F} = \mathcal{F}^{*}\Lambda\mathcal{F}$$
$$\Lambda^2 = \Lambda,$$
which implies that if $\lambda_i$ is the $i$th entry along the diagonal of $\Lambda$, then
$$\lambda_i^2 = \lambda_i, \qquad i \in [0 : n-1].$$
For each $i$, this equation can only hold true if either $\lambda_i = 0$ or $\lambda_i = 1$. Thus $\lambda_i \in \{0, 1\}$, and
$$K = \mathcal{F}^{*}\Lambda\mathcal{F} = \sum_{j=0}^{n-1} \lambda_j \left(\frac{\boldsymbol{\omega}^j}{\sqrt{n}}\right)\left(\frac{\boldsymbol{\omega}^j}{\sqrt{n}}\right)^{*} = \sum_{j \in \mathcal{J}} \left(\frac{\boldsymbol{\omega}^j}{\sqrt{n}}\right)\left(\frac{\boldsymbol{\omega}^j}{\sqrt{n}}\right)^{*},$$
where we define $\mathcal{J} := \{j \in [0 : n-1] : \lambda_j = 1\}$. However, this is precisely the projection matrix for the bandlimited subspace $B^{\mathcal{J}}$, constructed using the orthonormal basis $\{\frac{\boldsymbol{\omega}^j}{\sqrt{n}} : j \in \mathcal{J}\}$.
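Both directions of Theorem 3.3.1 can be checked together on a concrete space. The sketch below (our code) forms $K[a, b] = \frac{1}{n}\sum_{j \in \mathcal{J}} \omega^{j(a-b)}$ for $B^{8:\{0,1,4,5\}}$ and verifies that it is circulant and idempotent:

```python
import cmath

n, J = 8, [0, 1, 4, 5]
w = cmath.exp(2j * cmath.pi / n)
# K[a, b] = (1/n) sum over j in J of w^{j(a-b)}: the projection onto B^{8:J}.
K = [[sum(w ** (j * (a - b)) for j in J) / n for b in range(n)] for a in range(n)]

# Circulant: each entry depends only on (a - b) mod n.
for a in range(n):
    for b in range(n):
        assert abs(K[a][b] - K[(a - b) % n][0]) < 1e-9

# Idempotent: K^2 = K, so K is indeed a projection.
K2 = [[sum(K[a][m] * K[m][b] for m in range(n)) for b in range(n)] for a in range(n)]
for a in range(n):
    for b in range(n):
        assert abs(K2[a][b] - K[a][b]) < 1e-9
```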
Theorem 3.3.1 has several immediate consequences. Firstly, recall from Theorem 2.4.7 that all orthogonal interpolating bases consist of columns from K. Since in a bandlimited space K is circulant, we have the following corollary:

Corollary 3.3.2. In a bandlimited space, any orthogonal interpolating basis $\{u_i : i \in \mathcal{I}\}$ consists of rotationally equivalent vectors. These vectors are all rotations of h, the first column of K.
Finding an OIS then reduces to finding a collection of d shifts, such that rotating h by each of these amounts produces d orthogonal basis vectors. Two vectors whose entries are equivalent up to rotation have the same norm. Thus, if we have an orthogonal interpolating basis, Parseval’s theorem holds up to a constant:
Corollary 3.3.3 (Parseval's Theorem, Up To A Constant). Suppose $(\mathcal{I}, \mathcal{U})$ is an orthogonal interpolating system for a bandlimited space. Then
$$\|f\|_2^2 = \lambda^2 \sum_{i \in \mathcal{I}} |f(i)|^2,$$
where $\lambda = \|u_i\|_2$ for all $i \in \mathcal{I}$.
Proof. Since $(\mathcal{I}, \mathcal{U})$ is an OIS,
$$f = \sum_{i \in \mathcal{I}} f[i]\, u_i,$$
where the $u_i$'s are orthogonal. Taking the inner product of f with itself,
$$\|f\|_2^2 = \sum_{i \in \mathcal{I}} |f(i)|^2\, \|u_i\|_2^2.$$
Normally, we would have to stop here. However, since in a bandlimited space the $u_i$'s are drawn from the columns of a circulant matrix, they all have the same norm: $\lambda := \|u_i\|_2$ for all $i \in \mathcal{I}$. Therefore,
$$\|f\|_2^2 = \lambda^2 \sum_{i \in \mathcal{I}} |f(i)|^2.$$
Theorem 3.3.1 will allow us to find orthogonal interpolating systems more efficiently. Normally, to find an OIS, we must search the entire $n \times n$ matrix K for a $d \times d$ diagonal submatrix. However, since a circulant matrix is fully specified by its first column, we should be able to calculate an OIS by examining only the first column of K. Henceforth, we will denote the first column of K by h. There is a simple formula for h – it is the inverse DFT of $\lambda$, the eigenvalue sequence of K. In fact, this is true for all circulant matrices:
Lemma 3.3.4. Let $\Lambda = \operatorname{diag}(\lambda_i)_{i=0}^{n-1}$. Then a matrix Q satisfies $Q = \mathcal{F}^{*}\Lambda\mathcal{F}$ if and only if Q is a circulant matrix; that is, $Q[a, b] = Q[a - b, 0]$. Furthermore, the first column of Q is given by n times the IDFT of the eigenvalue sequence $(\lambda_i)$.
Proof. A proof of this well-known fact can be found in Appendix D.
The eigenvalues of K are easy to describe. Since K is the projection matrix for a bandlimited space $B^{\mathcal{J}}$, we can use the orthonormal basis $\{\frac{\boldsymbol{\omega}^j}{\sqrt{n}} : j \in \mathcal{J}\}$ to construct K as
$$K = \sum_{k \in \mathcal{J}} \left(\frac{\boldsymbol{\omega}^k}{\sqrt{n}}\right)\left(\frac{\boldsymbol{\omega}^k}{\sqrt{n}}\right)^{*} = \frac{1}{n}\sum_{j=0}^{n-1} \mathbb{1}[j \in \mathcal{J}]\,(\boldsymbol{\omega}^j)(\boldsymbol{\omega}^j)^{*} = \mathcal{F}^{*}\Lambda_{\mathcal{J}}\mathcal{F},$$
where $\Lambda_{\mathcal{J}}$ is the diagonal matrix such that $\Lambda_{\mathcal{J}}[j, j] = \mathbb{1}[j \in \mathcal{J}]$. Now applying Lemma D.0.7, we have the corollary:
Corollary 3.3.5. Let $K : \mathbb{C}^n \to Y$. Then the eigenvalue sequence of K is the characteristic sequence $\mathbb{1}[j \in \mathcal{J}]$, defined by
$$\mathbb{1}[j \in \mathcal{J}] = \begin{cases} 1 & j \in \mathcal{J} \\ 0 & j \notin \mathcal{J}. \end{cases}$$
For example, the projection matrix for $B^{12:\{0,1,4,5,8,9\}}$ has eigenvalues
$$(1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0),$$
and the corresponding first column of K is given by the IDFT of this sequence:
$$h = \left(\tfrac{1}{2},\; 0,\; 0,\; \tfrac{1}{4} + \tfrac{i}{4},\; 0,\; 0,\; 0,\; 0,\; 0,\; \tfrac{1}{4} - \tfrac{i}{4},\; 0,\; 0\right).$$
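This example can be reproduced in a few lines, using the text's sign convention $h[m] = \frac{1}{n}\sum_{j \in \mathcal{J}} \omega^{jm}$ (our sketch):

```python
import cmath

n, J = 12, [0, 1, 4, 5, 8, 9]
w = cmath.exp(2j * cmath.pi / n)
# h = IDFT of the characteristic sequence of J (the eigenvalue sequence of K).
h = [sum(w ** (j * m) for j in J) / n for m in range(n)]

assert abs(h[0] - 0.5) < 1e-9
assert abs(h[3] - (0.25 + 0.25j)) < 1e-9
assert abs(h[9] - (0.25 - 0.25j)) < 1e-9
assert all(abs(h[m]) < 1e-9 for m in range(n) if m not in (0, 3, 9))
```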
We now focus on how h should be processed to calculate the OISes. As illustrated in Figure 3.4, if there exists a diagonal submatrix in K, then its off-diagonal zeros will propagate back to h via circulancy. So $K[a, b] = 0$ if and only if $h[a - b] := K[a - b, 0] = 0$. This motivates the notion of a difference set:

Definition 3.3.6. Given a set $\mathcal{I} \subseteq [0 : n-1]$, define its difference set to be
$$\Delta\mathcal{I} := \{i - i' \pmod{n} : i, i' \in \mathcal{I};\; i \neq i'\}.$$
Then the following theorem, which refines Theorem 2.4.7 in the context of bandlimited spaces, is almost immediate:
Theorem 3.3.7. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an orthogonal interpolating system if and only if
$$\forall\, m \in \Delta\mathcal{I} : \quad h[m] = 0.$$
Figure 3.4: Off-diagonal zeros pulled back to first column by circulancy.
Expanding out h using Equation 3.1, this condition can alternatively be written as
$$\forall\, m \in \Delta\mathcal{I} : \quad 0 = \sum_{j \in \mathcal{J}} \omega^{jm}.$$
It can also be expressed in matrix form:
$$E_{\Delta\mathcal{I}}^T \mathcal{F}^{*} E_{\mathcal{J}} \mathbf{1} = \mathbf{0},$$
where $\mathbf{0}$ is the all-zero vector, of length $|\Delta\mathcal{I}|$.
Proof. By Theorem 2.4.7, $\mathcal{I}$ is an orthogonal interpolating system if and only if $E_{\mathcal{I}}^T K E_{\mathcal{I}}$ is diagonal, or equivalently, if and only if $K[i_1, i_2] = 0$ for distinct $i_1, i_2 \in \mathcal{I}$. Since K is circulant for a bandlimited space, this translates to $K[i_1 - i_2] = h[i_1 - i_2] = 0$. Using the notation of difference sets, this is equivalent to $h[\delta_i] = 0$ for $\delta_i \in \Delta\mathcal{I}$.
3.3.2 Dihedral Symmetry
Suppose $\mathcal{I}$ is an interpolating system for $B^{\mathcal{J}}$. Previously, we learned that rotations and reflections of either $\mathcal{I}$ or $\mathcal{J}$ do not affect this relationship (Theorem 3.2.3). We are now equipped to strengthen this result: orthogonality, or the lack thereof, is also preserved by rotations and reflections.
Theorem 3.3.8 (Orthogonality and Dihedral Symmetry). Let $\mathcal{I}$ and $\mathcal{J}$ be index sets in $[0 : n-1]$. Then

1. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS if and only if $B^{\mathcal{J}_\tau}$ has $\mathcal{I}$ as an OIS.

2. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS if and only if $B^{\mathcal{J}}$ has $\mathcal{I}_\tau$ as an OIS.

3. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS if and only if $B^{\mathcal{J}^{<}}$ has $\mathcal{I}$ as an OIS.

4. $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS if and only if $B^{\mathcal{J}}$ has $\mathcal{I}^{<}$ as an OIS.
Proof. To show the first statement, suppose $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS. Then by the necessity direction of Theorem 3.3.7,
$$0 = \sum_{j \in \mathcal{J}} \omega^{jm} = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi jm}{n}}, \qquad \forall\, m \in \Delta\mathcal{I}.$$
This is a system of $|\Delta\mathcal{I}|$ homogeneous equations. Multiplying each of these equations by $e^{i\frac{2\pi m\tau}{n}}$,
$$0 = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi(jm + m\tau)}{n}} = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi m(j + \tau)}{n}} = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi m((j + \tau) \bmod n)}{n}} = \sum_{\tilde{j} \in \mathcal{J}_\tau} e^{i\frac{2\pi m\tilde{j}}{n}}, \qquad \forall\, m \in \Delta\mathcal{I}.$$
Now by the sufficiency direction of Theorem 3.3.7, $B^{\mathcal{J}_\tau}$ has an OIS with the same sampling index set $\mathcal{I}$.
To prove the second statement,
$$\begin{aligned} \Delta(\mathcal{I}_\tau) &= \{(\tilde{i}_2 - \tilde{i}_1) \bmod n : \tilde{i}_1, \tilde{i}_2 \in \mathcal{I}_\tau,\; \tilde{i}_2 \neq \tilde{i}_1\} \\ &= \{((i_2 + \tau) \bmod n - (i_1 + \tau) \bmod n) \bmod n : i_1, i_2 \in \mathcal{I},\; i_2 \neq i_1\} \\ &= \{(i_2 + \tau - i_1 - \tau) \bmod n : i_1, i_2 \in \mathcal{I},\; i_2 \neq i_1\} \\ &= \{(i_2 - i_1) \bmod n : i_1, i_2 \in \mathcal{I},\; i_2 \neq i_1\} \\ &= \Delta\mathcal{I}. \end{aligned}$$
Now apply Theorem 3.3.7.
We now move on to the reflection statements, which are proved in a similar fashion. For the third statement, suppose $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS. By the necessity direction of Theorem 3.3.7,
$$0 = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi jm}{n}}, \qquad \forall\, m \in \Delta\mathcal{I}.$$
This is a system of $|\Delta\mathcal{I}|$ homogeneous equations. Conjugating each of these equations,
$$0 = \overline{\sum_{j \in \mathcal{J}} e^{i\frac{2\pi jm}{n}}} = \sum_{j \in \mathcal{J}} e^{-i\frac{2\pi jm}{n}} = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi(-j)m}{n}} = \sum_{j \in \mathcal{J}} e^{i\frac{2\pi((-j) \bmod n)m}{n}} = \sum_{\tilde{j} \in \mathcal{J}^{<}} e^{i\frac{2\pi\tilde{j}m}{n}}.$$
By the sufficiency direction of Theorem 3.3.7, $B^{\mathcal{J}^{<}}$ has an OIS with sampling index set $\mathcal{I}$.
Lastly, for the fourth statement,
$$\begin{aligned} \Delta(\mathcal{I}^{<}) &= \{(\tilde{i}_2 - \tilde{i}_1) \bmod n : \tilde{i}_1, \tilde{i}_2 \in \mathcal{I}^{<},\; \tilde{i}_2 \neq \tilde{i}_1\} \\ &= \{((-i_2 \bmod n) - (-i_1 \bmod n)) \bmod n : i_1, i_2 \in \mathcal{I},\; i_2 \neq i_1\} \\ &= \{(i_1 - i_2) \bmod n : i_1, i_2 \in \mathcal{I},\; i_2 \neq i_1\} \\ &= \{(i_2 - i_1) \bmod n : i_1, i_2 \in \mathcal{I},\; i_2 \neq i_1\} \\ &= \Delta\mathcal{I}. \end{aligned}$$
Now apply Theorem 3.3.7.
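The two difference-set identities used in this proof are easy to sanity-check numerically (our sketch):

```python
def delta(I, n):
    # Difference set of Definition 3.3.6, taken modulo n.
    return {(a - b) % n for a in I for b in I if a != b}

n, tau = 8, 3
I = {0, 1, 4, 5}
I_rot = {(i + tau) % n for i in I}   # rotated set
I_ref = {(-i) % n for i in I}        # reflected set
assert delta(I_rot, n) == delta(I, n)
assert delta(I_ref, n) == delta(I, n)
```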
3.3.3 Cliques in Difference Graphs
Our aim is to find every OIS for a given bandlimited space. This amounts to identifying all sets $\mathcal{I}$ such that $h[m] = 0$ for $m \in \Delta\mathcal{I}$. Figure 3.5 illustrates the computational difficulties here. Since our view has been restricted to the first column, the only thing we can do is identify the zeros in h, which we denote by $Z := \{z : h[z] = 0\}$. If one guesses that $\mathcal{I} = \{a, b, c\}$ is an orthogonal interpolating system, this can be quickly verified by computing $\Delta\mathcal{I}$ and determining whether $\Delta\mathcal{I}$ is a subset of Z (e.g., $h[a-b] = 0$, $h[b-c] = 0$, $h[a-c] = 0$). However, how does one efficiently generate sets $\mathcal{I}$ to guess in the first place? Simply iterating through all $\binom{n}{d}$ possible sampling sets would be no better than searching the entire matrix K for a diagonal submatrix.
Symmetry comes to our rescue here. By Theorem 3.3.8, if $B^{\mathcal{J}}$ has $\mathcal{I}$ as an OIS, then it also has $\mathcal{I}_\tau$ as an OIS. Therefore, we may assume without loss of generality that the sampling set has been rotated to always include zero. Define $\mathcal{I}^\times := \mathcal{I} \setminus \{0\}$. Then the remaining $d - 1$ unknown sampling locations are precisely the set $\mathcal{I}^\times$. Then, by definition
Figure 3.5: Detailed view of zeros pulled back to first column.
of the difference set,
$$\mathcal{I}^\times \subseteq \Delta\mathcal{I}.$$
To see this, observe that $\Delta\mathcal{I}$ consists of the differences $i - i'$ between all possible pairs $i, i' \in \mathcal{I}$, so all nonzero elements of $\mathcal{I}$ will reappear in $\Delta\mathcal{I}$ when $i' = 0$. And since $\Delta\mathcal{I}$ is a subset of Z, we can summarize the situation as follows:
Lemma 3.3.9. If $\mathcal{I}$ is an OIS for a bandlimited space, then we may assume without loss of generality that $0 \in \mathcal{I}$, so that
$$\mathcal{I}^\times \subseteq \Delta\mathcal{I} \subseteq Z,$$
where $Z := \{z : h[z] = 0\}$ is the zero set of h.
By Lemma 3.3.9, all elements of $\mathcal{I}^\times$ may be assumed to lie amongst Z. Therefore, by Theorem 3.3.7, we only need to find a $(d-1)$-subset $\mathcal{I} \setminus \{0\} \subseteq Z$ such that $\Delta\mathcal{I} \subseteq Z$. This can be interpreted as a graph theory problem. For a satisfactory $\mathcal{I}^\times \subseteq Z$, every pair $i, i' \in \mathcal{I}$ must satisfy $i - i' \in Z$. Such a constraint is analogous to that of a clique: a subset of vertices in a graph such that every two vertices have a direct edge between them. To make the analogy exact, consider constructing an undirected graph using Z as the vertex set, such that there is an edge $(i, i')$ if and only if $i - i'$ is also a vertex.¹ Then $\mathcal{I}$ is an orthogonal interpolating system if and only if $\mathcal{I}^\times$ is a clique in this graph. We summarize this as follows:

Theorem 3.3.10. Given a bandlimited space $B^{\mathcal{J}}$, let $Z = \{z : h[z] = 0\}$, where $h = \mathrm{IDFT}(\mathbb{1}[j \in \mathcal{J}])$. Construct the difference graph $G = (V, E)$ where $V = Z$, and $(i, i') \in E$ if and only if $i - i' \in V$. Then $\mathcal{I}$ is an orthogonal interpolating system if and only if $\mathcal{I}^\times$ is a clique in G.
Example. We wish to find orthogonal interpolating systems for the bandlimited space $B^{8:\{0,1,4,5\}}$.

1. The characteristic sequence for the frequencies is
$$\mathbb{1}[j \in \mathcal{J}] = (1, 1, 0, 0, 1, 1, 0, 0).$$

2. The IDFT of this sequence is
$$h = \left(\tfrac{1}{2},\; 0,\; \tfrac{1}{4} + \tfrac{i}{4},\; 0,\; 0,\; 0,\; \tfrac{1}{4} - \tfrac{i}{4},\; 0\right).$$

3. The zero set is $Z = \{1, 3, 4, 5, 7\}$.

¹This edge law is well-defined because Z is symmetric: $h[z] = 0 \iff h[-z] = 0$, where $-z$ is interpreted modulo n. Therefore $i - i' \in Z$ if and only if $i' - i \in Z$. The symmetry of Z is a consequence of the fact that $\mathbb{1}[j \in \mathcal{J}]$ is a real sequence, and the IDFT maps real sequences to conjugate-symmetric sequences.
4. We now construct the difference graph G, shown in Figure 3.6. There are five ver- tices, one for each element of Z. An edge is drawn between two vertices if and only if their difference is also a vertex. For example, an edge exists between 1 and 5 be- cause 5 1=4 is a vertex. However, there is no edge between 1 and 3 because − 3 1=2 is not a vertex. −
Figure 3.6: Difference graph for B^{8:{0,1,4,5}}.

5. The dimension of B^J is d = 4. Thus |I^×| = d − 1 = 3, and we must find cliques of size 3 in the graph. Here, the 3-cliques are the triangles {1, 4, 5} and {3, 4, 7}.

6. By appending a zero to each of these sets, we have found all orthogonal interpolating systems for this space:

I_1 = {0, 1, 4, 5},   I_2 = {0, 3, 4, 7}.

7. Lastly, we can verify from first principles that these are orthogonal interpolating systems. The projection matrix for B^{8:{0,1,4,5}} is displayed twice in Figure 3.7, with the submatrices corresponding to I_1 and I_2 highlighted in pink. We see that these submatrices are diagonal, as required by Theorem 2.4.7.

Figure 3.7: Projection matrix for B^{8:{0,1,4,5}}, with two OISes highlighted.
3.3.4 Filtering Out Equivalent Sampling Sets
In Example 3.3.3, the orthogonal interpolating systems I_1 = {0, 1, 4, 5} and I_2 = {0, 3, 4, 7} are rotationally equivalent: I_1 ≡ I_2 + 1 mod 8. This equivalence is shown visually in Figure 3.8. By Theorem 3.3.8, we could have deduced I_2 from I_1 alone. Thus, it is redundant to list both I_1 and I_2 as solutions. So after finding all (d − 1)-cliques and appending 0 to each, we should list only sets that are not dihedrally equivalent to each other.

We first consider the problem of determining if two sets are rotationally equivalent. Here we use a method devised by J.P. Duval. Suppose we are given a set I ⊆ [n], where |I| = d. The Duval signature of I, denoted by duval(I), is constructed as follows:

1. Compute the differences of adjacent elements in I, including the wrap-around.

2. Concatenate the difference sequence with itself.

3. Scan the new sequence for the lexicographically smallest substring of length d. This is the Duval signature.
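The three steps can be sketched in Python as follows (our own illustrative helper, not code from the dissertation; for brevity the doubling-and-scan is written as a minimum over all d rotations, which costs O(d²) comparisons rather than the linear time achievable with Duval's algorithm proper):

```python
def duval_signature(n, sample_set):
    """Canonical form of a sampling set under rotation modulo n:
    the lexicographically least rotation of its cyclic difference sequence."""
    s = sorted(sample_set)
    d = len(s)
    # Step 1: adjacent differences, including the wrap-around term.
    diffs = [(s[(k + 1) % d] - s[k]) % n for k in range(d)]
    # Steps 2-3: concatenate with itself and take the least length-d window.
    doubled = diffs + diffs
    return min(tuple(doubled[k:k + d]) for k in range(d))
```

Two sets are rotationally equivalent exactly when their signatures agree (Theorem 3.3.11).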
Figure 3.8: I_1 = {0, 1, 4, 5} and I_2 = {0, 3, 4, 7} are rotationally equivalent.
Example. Let us compute the Duval signature for I = {0, 1, 5, 12} when n = 16.

1. The difference sequence is {1, 4, 7, 4}, since

1 − 0 = 1
5 − 1 = 4
12 − 5 = 7
16 − 12 = 4.

2. Concatenating the difference sequence with itself yields {1, 4, 7, 4, 1, 4, 7, 4}.

3. Scanning the new sequence from left to right, we will see the following substrings of length 4, in order:

(a) [1, 4, 7, 4], 1, 4, 7, 4 → {1, 4, 7, 4}
(b) 1, [4, 7, 4, 1], 4, 7, 4 → {4, 7, 4, 1}
(c) 1, 4, [7, 4, 1, 4], 7, 4 → {7, 4, 1, 4}
(d) 1, 4, 7, [4, 1, 4, 7], 4 → {4, 1, 4, 7}
(e) 1, 4, 7, 4, [1, 4, 7, 4] → {1, 4, 7, 4}

The lexicographic minimum, which can be determined in one pass, is {1, 4, 7, 4}. Thus the Duval signature for n = 16 and {0, 1, 5, 12} is {1, 4, 7, 4}.
Example. We now compute another Duval signature, this time for I = {0, 4, 5, 9}; n is still 16.

1. Difference sequence: {4, 1, 4, 7}

2. Self-concatenation: {4, 1, 4, 7, 4, 1, 4, 7}

3. Scan for the lexicographically minimum substring of length 4:

(a) [4, 1, 4, 7], 4, 1, 4, 7 → {4, 1, 4, 7}
(b) 4, [1, 4, 7, 4], 1, 4, 7 → {1, 4, 7, 4}
(c) 4, 1, [4, 7, 4, 1], 4, 7 → {4, 7, 4, 1}
(d) 4, 1, 4, [7, 4, 1, 4], 7 → {7, 4, 1, 4}
(e) 4, 1, 4, 7, [4, 1, 4, 7] → {4, 1, 4, 7}

The Duval signature is again {1, 4, 7, 4}.
Theorem 3.3.11. Two sets I ⊆ [n] and I′ ⊆ [n], each of size d, are rotationally equivalent if and only if they have the same Duval signature.

Proof. See Duval [5].

Figure 3.9: {0, 1, 5, 12} and {0, 4, 5, 9} are rotationally equivalent.

Example. In the previous two examples, we found that for n = 16, both {0, 1, 5, 12} and {0, 4, 5, 9} have the same Duval signature of {1, 4, 7, 4}. By Theorem 3.3.11, they are rotationally equivalent. This is visually verified in Figure 3.9.
Given a collection of sampling sets (sets of cardinality d), we can now filter the collection down to one representative per rotational equivalence class modulo n as follows:
1. Compute the Duval signature for each sampling set.
2. Apply a radix sort to the resulting signatures.
3. Scan the sorted signatures in one pass and pluck off one representative whenever the signature changes.2
Table 3.5 illustrates this procedure applied to five sets. After applying a radix sort to the

²Note that the approach described here is similar to that employed when identifying anagrams amongst a list of dictionary words. By computing a signature for each word – a sorted list of its letters – and then sorting the words by their signatures, we can construct all anagram equivalence classes, and return a member from each class in linear time if desired.
necklace     signature                         necklace     signature
{0,1,2,5}    {1,1,3,3}                         {0,1,2,3}    {1,1,1,5}
{3,4,5,6}    {1,1,1,5}     sort                {3,4,5,6}    {1,1,1,5}
{0,1,3,4}    {1,2,1,4}     signatures =⇒       {0,1,2,5}    {1,1,3,3}
{0,1,2,3}    {1,1,1,5}                         {2,3,4,7}    {1,1,3,3}
{2,3,4,7}    {1,1,3,3}                         {0,1,3,4}    {1,2,1,4}

Table 3.5: Using Duval signatures to filter unique sets up to rotation. The representatives kept are {0,1,2,3}, {0,1,2,5}, and {0,1,3,4}.
signatures, we traverse the second column once from top to bottom, and pluck out one representative from each equivalence class. Here there are three representatives: {0, 1, 2, 3}, {0, 1, 2, 5}, and {0, 1, 3, 4}. In general, the runtime of this filtering procedure is O(c · d), where c is the number of (d − 1)-cliques in the difference graph.

If desired, it is also easy to filter a list of sampling sets down to one representative per bracelet (dihedral) equivalence class. If the original list contains c sets, augment it with the reversal of each set in the list, to create a new list of 2c sets. Then again use the aforementioned method of computing Duval signatures and a radix sort to extract one representative from each equivalence class.
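Both filters can be sketched together in Python (illustrative code under the same assumptions as the previous sketch; instead of a radix sort we keep the first set seen for each signature, which yields the same equivalence classes, merely with possibly different representatives):

```python
def duval_signature(n, sample_set):
    """Least rotation of the cyclic difference sequence of a set modulo n."""
    s = sorted(sample_set)
    d = len(s)
    diffs = [(s[(k + 1) % d] - s[k]) % n for k in range(d)]
    doubled = diffs + diffs
    return min(tuple(doubled[k:k + d]) for k in range(d))

def filter_rotations(n, sample_sets):
    """One representative per rotational equivalence class modulo n."""
    seen, reps = set(), []
    for s in sample_sets:
        sig = duval_signature(n, s)
        if sig not in seen:
            seen.add(sig)
            reps.append(s)
    return reps

def filter_bracelets(n, sample_sets):
    """One representative per dihedral (bracelet) class: the key is the lesser
    of the Duval signatures of the set and of its reflection modulo n."""
    seen, reps = set(), []
    for s in sample_sets:
        sig = min(duval_signature(n, s),
                  duval_signature(n, {(-x) % n for x in s}))
        if sig not in seen:
            seen.add(sig)
            reps.append(s)
    return reps
```

On the five sets of Table 3.5 (n = 8), `filter_rotations` keeps one set from each of the three signature classes.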
3.3.5 Complete OIS Algorithm for Bandlimited Spaces
By combining the procedures discussed in the previous two subsections, we can build a complete algorithm for finding all orthogonal interpolating systems for a bandlimited space. The block diagram for this algorithm is shown in Figure 3.10. We will now walk through the algorithm with a fully worked-out example.
[Block diagram: (n, freqs) → construct 0-1 characteristic sequence → F⁻¹ → h → find zeroes → difference graph → find cliques of size d − 1 → append zeroes → filter cyclic/dihedral equivalences (Duval) → sampling sequences and sampling theorems (orthogonal bases). Along the way, h/h[0] yields the reconstruction kernel ψ.]
Figure 3.10: Block diagram for OIS search algorithm.
Fully worked-out example

Using the algorithm in Figure 3.10, compute all orthogonal interpolating systems for B^{18:{0,1,8,9,10,17}}.
1. Write a length-18 characteristic sequence describing which frequencies are active:
1_{18:{0,1,8,9,10,17}} = {1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1}.

(These are the eigenvalues of the projection matrix K for B^J.)
2. Take the inverse Discrete Fourier Transform of this 0-1 sequence:
h := F⁻¹ 1_{18:{0,1,8,9,10,17}}
  = {1/3, 0, −(1/9)(−1)^{7/9}(1 + (−1)^{2/9} + (−1)^{4/9}), 0, −(1/9)(−1)^{5/9}(1 + (−1)^{4/9} + (−1)^{8/9}), 0, 0, 0,
     (1/9)(1 − (−1)^{1/9} + (−1)^{8/9}), 0, (1/9)(1 − (−1)^{1/9} + (−1)^{8/9}), 0, 0, 0,
     −(1/9)(−1)^{5/9}(1 + (−1)^{4/9} + (−1)^{8/9}), 0, −(1/9)(−1)^{7/9}(1 + (−1)^{2/9} + (−1)^{4/9}), 0}.

3. The zero set of h is Z := {1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17}.

4. Construct the difference graph from Z. (See Figure 3.11.)
Figure 3.11: Difference graph associated with B^{18:{0,1,8,9,10,17}}.

5. Find all cliques of size 5 in this graph. The 5-cliques are {1, 6, 7, 12, 13}, {3, 6, 9, 12, 15}, and {5, 6, 11, 12, 17}, as highlighted in Figure 3.12.
Figure 3.12: 5-cliques in the difference graph for B^{18:{0,1,8,9,10,17}}.

Figure 3.13: Orthogonal sampling sets {0, 3, 6, 9, 12, 15} and {0, 1, 6, 7, 12, 13} for B^{18:{0,1,8,9,10,17}}.

6. Construct sampling sets by appending 0 to each of the cliques:

{0, 1, 6, 7, 12, 13}, {0, 3, 6, 9, 12, 15}, {0, 5, 6, 11, 12, 17}.

7. Use Duval signatures and a radix sort to filter out rotationally equivalent sets modulo 18. Afterwards, only two of the three sets remain, as shown in Figure 3.13. (Observe that {0, 1, 6, 7, 12, 13} ≡ {0, 5, 6, 11, 12, 17} + 1 (mod 18).)
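Steps 1–5 of this walkthrough are easy to verify numerically. The sketch below is our own illustrative code (brute-force subset enumeration stands in for backtracking, which is acceptable at this size): it recovers the zero set and the three 5-cliques found above.

```python
import itertools
import numpy as np

n, J = 18, {0, 1, 8, 9, 10, 17}

# Steps 1-3: characteristic sequence, IDFT, zero set.
indicator = np.zeros(n)
indicator[list(J)] = 1.0
h = np.fft.ifft(indicator)
Z = {m for m in range(n) if abs(h[m]) < 1e-9}

# Steps 4-5: a subset of Z is a clique of the difference graph exactly
# when every pairwise difference lands back in Z.
def is_clique(S):
    return all((a - b) % n in Z for a, b in itertools.permutations(S, 2))

cliques = [set(S) for S in itertools.combinations(sorted(Z), 5) if is_clique(S)]
```

Running this reproduces Z = {1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17} and exactly the three 5-cliques of Figure 3.12.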
8. Lastly, we include the orthogonal interpolating bases. Let ψ = h/h[0]. This function, shown in Figure 3.14, will act like the sinc function from Shannon's sampling theorem.
Figure 3.14: OIS basis vector ψ for B^{18:{0,1,8,9,10,17}}.

Let ψ_k denote the function ψ barrel-shifted to the right by k units. Then the sampling theorem for the space B^{18:{0,1,8,9,10,17}} is:

∀ f ∈ B^{18:{0,1,8,9,10,17}} :  f = Σ_{k∈I} f[k] ψ_k

where I is any rotation of the sequences {0, 1, 6, 7, 12, 13} or {0, 3, 6, 9, 12, 15}. Note how I serves the dual role of specifying both 1) where to take samples, and also 2) how much to shift the ψ's by, just as the integers do in Shannon's sampling theorem. Under these choices of shifts, the ψ_k's are orthogonal to each other. This is an exhaustive classification of orthogonal interpolating systems for this space – there are no others.
Computational Complexity
The runtime of the OIS algorithm can be broken down as follows:
n = length of vector
d = number of frequencies
|Z| = number of zeroes in h
c = number of cliques found
1. 0-1 sequence: O(d)

2. F⁻¹: O(n log n)

3. Find zeroes: O(n)

4. Difference graph: O(|Z|²)

5. Find cliques of size d − 1: O(|Z|^{d−1}) using backtracking³

6. Append zeroes: O(1)

7. Filter cyclic equivalences: O(c · d) (Duval signatures and radix sort)
The overarching bottleneck here is the clique-finding step, for which a backtracking algorithm still has O(|Z|^{d−1}) runtime in the worst case, which is exponential in d. In general, deciding whether a graph contains a clique of a given size is NP-complete. However, if our class of graphs exhibits special structure, such as perfection, then cliques can be found in polynomial time. This is the topic of the following subsection.

³Or, polynomial time in both |Z| and d if the difference graphs are perfect.
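A minimal backtracking enumerator for the clique-finding step might look as follows (an illustrative sketch, not the dissertation's implementation; candidates for extending a partial clique are restricted to later neighbors of the newest vertex, so each clique is produced exactly once):

```python
def cliques_of_size(adj, k):
    """All k-cliques of a graph given as {vertex: set_of_neighbors}."""
    out = []

    def extend(clique, candidates):
        if len(clique) == k:
            out.append(set(clique))
            return
        for idx, v in enumerate(candidates):
            # Only candidates after v that are adjacent to v can extend
            # the clique; this prunes the search and avoids duplicates.
            extend(clique + [v],
                   [u for u in candidates[idx + 1:] if u in adj[v]])

    extend([], sorted(adj))
    return out
```

On the difference graph of B^{8:{0,1,4,5}} this finds exactly the two triangles {1, 4, 5} and {3, 4, 7} from the earlier example.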
3.3.6 Perfect Graph Conjecture for Bandlimited Spaces
Perfect graphs are yet another concept motivated by Claude Shannon: Claude Berge introduced them while seeking to determine the zero-error capacity of graphs – an idea put forth by Shannon in [6]. To explain what a perfect graph is, we review the definitions of clique number and chromatic number for a graph G. The clique number, denoted by ω(G), is simply the size of the largest clique. On the other hand, the chromatic number, denoted by γ(G), is the smallest number of colors needed to color the vertices such that no two adjacent vertices have the same color. It is clear that ω(G) ≤ γ(G), because all members of a clique must have different colors. A perfect graph is one in which ω(G′) = γ(G′) for every induced subgraph G′ (in particular, ω(G) = γ(G)). Grötschel, Lovász, and Schrijver [7] proved that for perfect graphs, cliques can be computed in polynomial time through semidefinite programming. The connection of all this to our work is the following conjecture:
Conjecture 3.3.12. Difference graphs for bandlimited spaces are perfect.
To provide empirical evidence, Figure 3.15 shows many difference graphs associated with different bandlimited spaces.4 Figures 3.16 and 3.17 show some larger difference graphs. The maximum clique and minimal coloring are also identified to illustrate perfec- tion. To date, we have not found an imperfect difference graph.
⁴If Conjecture 3.3.12 is false, a backup conjecture is that bandlimited spaces which do have orthogonal interpolating systems have perfect difference graphs. This weaker conjecture would also allow us to theoretically find every OIS in polynomial time, because checking in advance whether or not a graph is perfect is (theoretically) polynomial time (see Chudnovsky and Seymour [8]). However, most of these polynomial-time algorithms are too complex to implement in practice.
Figure 3.15: A menagerie of (perfect) difference graphs for various bandlimited spaces, including B^{12:0,1,4,5,8,9}, B^{16:0,2,8,10}, B^{18:0,1,2,6,7,8,12,13,14}, B^{64:0,1,2,3,8,9,10,11}, B^{18:0,2,3,4,5,6,7,8,10}, and B^{30:0,3,5,8,10,13}.
Figure 3.16: Difference graph for B^{144:0,27,30,35,60,72,75,83,102,123,131,132}. ω(G) = γ(G) = 11.

Figure 3.17: Difference graph for B^{144:0,16,30,44,58,74,80,94,108,110,124,138}. ω(G) = γ(G) = 11.
3.3.7 h-Nullstellensatz
The vertex sets in our difference graphs are zero sets from the IDFTs of binary sequences of length n. These zero sets have a precise group-theoretic structure. In this section, we prove the following theorem, whose name has been chosen tongue-in-cheek:
h-Nullstellensatz: Let h be the IDFT of a binary sequence of length n. If h[m] = 0, then h[mr] = 0 for all r coprime to n. Here the product mr is interpreted modulo n.

One can use the h-Nullstellensatz to generate locations of more zeros in h after having pinpointed the location of only one zero.⁵ For example, in the difference graph for B^{18:{0,1,8,9,10,17}} (Figure 3.11), notice that 3 is a zero (vertex). Since 5 is coprime to 18, the h-Nullstellensatz says that 3 · 5 = 15 is also a zero, and indeed it is.
We will also argue the following corollary:

⁵In fact, the total number of zeros that can be generated from h[m] = 0 is φ(n/(n, m)).

Corollary: The zero set of h, as well as its complement, both have the form

⊔_{s∈G} (n/s) · (Z/sZ)×

where each s is distinct. This is a disjoint union of multiplicative groups of integer residues modulo s, where each group is scaled by the coefficient n/s. These results will help prove some general facts about orthogonal interpolating systems in bandlimited spaces.
Notation
Let G denote a group, and |G| its size. For g ∈ G, ord(g) is the smallest positive integer m such that g^m = 1. The greatest common divisor of integers a and b is denoted by (a, b). The nth cyclotomic polynomial is denoted by Φ_n(x). The totient function is denoted by φ(n).
Lemmas
Our proof of the h-Nullstellensatz requires three Lemmas from basic abstract algebra and number theory. Readers already familiar with them are invited to skip ahead.
Lemma 3.3.13. Let x be an element of a cyclic group G, where ord(x) = n. Then
m n ord(x )= (m,n) .
Proof. Let t := ord(xm). Then (xm)t =1 can be rewritten as xmt =1, which implies that
n mt. | 82 Chapter 3 : Discrete Sampling In Bandlimited Spaces
Dividing both sides by (m, n),

n/(m, n)  |  (m/(m, n)) · t.

Since n/(m, n) and m/(m, n) are relatively prime, it follows by Euclid's Lemma that

n/(m, n)  |  t.

On the other hand, since (x^m)^{n/(m,n)} = (x^n)^{m/(m,n)} = 1, it follows that

t  |  n/(m, n).
Lemma 3.3.14. Let α be a primitive nth root of unity. Then the cyclotomic polynomial

Φ_n(x) := ∏_{k ∈ [n], (k,n)=1} (x − ω_n^k)

is the "minimal polynomial" of α – the unique monic irreducible polynomial over Z with α as a root.
Proof. The proof is long enough to be too much of a digression to include here, but it can be found in many abstract algebra textbooks, such as [9].
Lemma 3.3.15. If (r, n)=1, then (m, n)=(mr, n).
Proof 1. By the Unique Prime Factorization Theorem, we may write m = ∏_i p_i^{α_i}, n = ∏_i p_i^{β_i}, and r = ∏_i p_i^{γ_i}, where the p_i's range over all primes. By definition,

(m, n) = ∏_i p_i^{min(α_i, β_i)}

where min(α_i, β_i) is defined to be zero if the ith prime is not a divisor of either m or n. On the other hand, writing mr = ∏_i p_i^{α_i + γ_i},

(mr, n) = ∏_i p_i^{min(α_i + γ_i, β_i)} = ∏_i p_i^{min(α_i, β_i)}

where the last equality holds because r and n have no nontrivial common divisors, so that γ_i = 0 whenever β_i > 0.
Proof 2. Let d = (m, n). Then there exist integers x_0 and y_0 such that

d = m x_0 + n y_0.

Multiplying this equation by r,

r · d = mr · x_0 + n · r y_0.

Since (r, n) = 1, there exist integers x_1, y_1 such that 1 = r x_1 + n y_1. Thus,

r x_1 · d = mr · x_1 x_0 + n · r x_1 y_0
(1 − n y_1) · d = mr · x_1 x_0 + n · r x_1 y_0
d = mr · x_1 x_0 + n · (r x_1 y_0 + d y_1)
d = mr · x_2 + n · y_2,   where x_2 := x_1 x_0 and y_2 := r x_1 y_0 + d y_1.

From this relation, we see that any divisor of both mr and n is also a divisor of d. Thus (mr, n) ≤ (m, n). On the other hand, clearly (mr, n) ≥ (m, n), since a number which divides both m and n must also divide mr and n. Thus, (mr, n) = (m, n).
Proof of h-Nullstellensatz
Theorem 3.3.16. Let h be the IDFT of a binary sequence of length n. If h[m] = 0 and r is relatively prime to n, then h[mr] = 0.
Proof. Let J denote the set over which the binary sequence ĥ is nonzero; that is,

ĥ[j] = 1 for j ∈ J, and 0 otherwise.

The inverse Discrete Fourier Transform of ĥ, denoted by h, may then be expressed as

h[m] = Σ_{j=0}^{n−1} ĥ[j] ω_n^{jm} = Σ_{j∈J} ω_n^{jm}.

Now suppose h[m] = 0. Defining p_J(x) := Σ_{j∈J} x^j, this condition is equivalent to

p_J(ω_n^m) = 0.
Since ω_n has order n, Lemma 3.3.13 indicates that ω_n^m has order s, where

s = n/(n, m).

Lemma 3.3.14 states that the cyclotomic polynomial Φ_s(x) is the unique irreducible polynomial over the integers for which ω_n^m is a root. Since Z[x] is a Unique Factorization Domain, p_J(x) can be uniquely factored into irreducible polynomials with integer coefficients. Thus p_J(ω_n^m) = 0 implies that Φ_s(x) is a factor of p_J(x). All roots of Φ_s(x) are then also roots of p_J(x). By definition, the roots of Φ_s(x) are

{ω_s^k : gcd(k, s) = 1} ≅ (Z/sZ)×.
(These are all possible primitive sth roots of unity.) Now consider evaluating h[mr], for r coprime to n:

h[mr] = Σ_{j=0}^{n−1} ĥ[j] (ω_n)^{(mr)·j} = Σ_{j∈J} (ω_n^{mr})^j = p_J(ω_n^{mr}).

By Lemma 3.3.13, the order of ω_n^{mr} is n/(mr, n). However, from Lemma 3.3.15, (mr, n) = (m, n), so ω_n^{mr} is also a primitive sth root of unity. (In fact, ω_n^{mr} = ω_s^{mr/(m,n)}, and mr/(m, n) is coprime to s.) Thus ω_n^{mr} is a root of Φ_s(x), and in turn a root of p_J(x). Hence h[mr] = 0.
In the proof of Theorem 3.3.16, we saw that h[m] = 0 actually implies the presence of φ(s) distinct zeros in h, each of which is associated with a different root of Φ_s(x). One may wonder if all of these zeros must be of the form h[mr] = 0 for r coprime to n. That is, are any zeros "missed" due to the phrasing of the theorem? The following remark assures that none are missed.

Remark 3.3.17. Given h[m] = 0, the positions

{mr mod n : (r, n) = 1}

comprise φ(s) distinct zeros of h, where s = n/(m, n).
Proof. The remark is equivalent to saying that all primitive sth roots of unity can be written in the form ω_n^{mr} for some r coprime to n. To prove this constructively, suppose ω_s^t is a primitive sth root of unity that we wish to write in this form. Then (s, t) = 1, and t lies in (Z/sZ)×. Next, observe that

ω_n^m = ω_s^{m/(m,n)}.

The exponent m/(m, n) is coprime to s, and thus also lies in (Z/sZ)×. Since (Z/sZ)× is a group, there exists a unique r ∈ (Z/sZ)× such that (m/(m, n)) · r ≡ t mod s. Thus,

ω_s^{(m/(m,n)) · r} = ω_s^t,   i.e.,   ω_n^{mr} = ω_s^t.
Remark 3.3.18. We could have alternatively phrased Theorem 3.3.16 as follows: h[m] = 0 if and only if h[mr] = 0, for r coprime to n. However, the reverse direction follows immediately since the set of numbers coprime to n is a group. So given h[mr] = 0, there exists an r⁻¹ coprime to n such that 0 = h[mr · r⁻¹] = h[m].
By reapplying Theorem 3.3.16 to the set of all zeros in h, we have the following corol- lary:
Corollary 3.3.19. The zero set of h has the form

{m : h[m] = 0} = ⊔_{s∈G} (n/s) · (Z/sZ)×

where each s is distinct, and the union is disjoint.

Proof. Suppose h[m] = 0. Defining s = n/(m, n), we know that Φ_s(x) is a factor of p_J(x). Thus ω_s^k is a root of p_J(x) for any k coprime to s. Hence,

0 = p_J(ω_s^k) = Σ_{j∈J} (ω_s^k)^j = Σ_{j∈J} (ω_n^{k·n/s})^j = Σ_{j=0}^{n−1} ĥ[j] (ω_n)^{(k·n/s)·j} = h[(n/s) · k].

Thus (n/s) · (Z/sZ)× is a subset of the zero set. The disjointness of the union follows from the fact that primitive roots of the same order all lie in the same multiplicative group (Z/sZ)×. So zeros associated with primitive roots of a different order must reside in a different group.
Below are some applications of the aforementioned theorems.
Example 1
Let us revisit B^{18:{0,1,8,9,10,17}}. The characteristic sequence is

ĥ = {1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1}

whose inverse DFT is

h = {1/3, 0, −(1/9)(−1)^{7/9}(1 + (−1)^{2/9} + (−1)^{4/9}), 0, −(1/9)(−1)^{5/9}(1 + (−1)^{4/9} + (−1)^{8/9}), 0, 0, 0,
     (1/9)(1 − (−1)^{1/9} + (−1)^{8/9}), 0, (1/9)(1 − (−1)^{1/9} + (−1)^{8/9}), 0, 0, 0,
     −(1/9)(−1)^{5/9}(1 + (−1)^{4/9} + (−1)^{8/9}), 0, −(1/9)(−1)^{7/9}(1 + (−1)^{2/9} + (−1)^{4/9}), 0}.

The corresponding zero set is

Z := {1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17}

which is the vertex set of the difference graph for B^J (see Figure 3.11).
We now show how to build this zero set from only a few of its elements via Theorem 3.3.16.
1. First, consider 1 ∈ Z. Then

0 = h[1] = Σ_{j∈J} ω_18^j.

Defining the polynomial p_J(x) := Σ_{j∈J} x^j, the above equation translates to

p_J(ω_18) = 0.

Thus Φ_18(x), the minimal polynomial of ω_18, is a factor of p_J(x). The roots of Φ_18(x) consist of all ω_18^k where the exponent k is coprime to 18:

ω_18^1, ω_18^5, ω_18^7, ω_18^11, ω_18^13, ω_18^17.

Note that the exponents together comprise the multiplicative group (Z/18Z)×. Since these roots are also zeros of p_J(x),

0 = p_J(ω_18^5) = Σ_{j∈J} (ω_18^5)^j = h[5]
0 = p_J(ω_18^7) = Σ_{j∈J} (ω_18^7)^j = h[7]
0 = p_J(ω_18^11) = Σ_{j∈J} (ω_18^11)^j = h[11]
0 = p_J(ω_18^13) = Σ_{j∈J} (ω_18^13)^j = h[13]
0 = p_J(ω_18^17) = Σ_{j∈J} (ω_18^17)^j = h[17].

Thus, knowing that 1 is in Z leads to knowing that 5, 7, 11, 13, 17 also lie in Z.
2. Next, consider 3 ∈ Z. This translates to

0 = h[3] = Σ_{j∈J} (ω_18^3)^j = p_J(ω_18^3).

From Lemma 3.3.13, ω_18^3 is a sixth root of unity. Thus Φ_6(x) is a factor of p_J(x). The roots of Φ_6(x) are

ω_6^1 = ω_18^3,
ω_6^5 = ω_18^15.

Since the roots of Φ_6(x) are also zeros of p_J(x),

0 = p_J(ω_18^3) = Σ_{j∈J} (ω_18^3)^j = h[3],
0 = p_J(ω_18^15) = Σ_{j∈J} (ω_18^15)^j = h[15].
3. Next, consider 6 ∈ Z:

0 = h[6] = Σ_{j∈J} (ω_18^6)^j = p_J(ω_18^6).

From Lemma 3.3.13, ω_18^6 is a third root of unity. Thus Φ_3(x) is a factor of p_J(x). The roots of Φ_3(x) are

ω_3^1 = ω_18^6,
ω_3^2 = ω_18^12.

Since the roots of Φ_3(x) are also zeros of p_J(x),

0 = p_J(ω_18^6) = Σ_{j∈J} (ω_18^6)^j = h[6],
0 = p_J(ω_18^12) = Σ_{j∈J} (ω_18^12)^j = h[12].
4. Lastly, consider 9 ∈ Z, the only remaining zero:

0 = h[9] = Σ_{j∈J} (ω_18^9)^j = p_J(ω_18^9).

From Lemma 3.3.13, ω_18^9 is a second root of unity. Thus Φ_2(x) is a factor of p_J(x). Φ_2(x) has only one root, namely ω_2^1 = ω_18^9, so no new zeros are deduced.
We conclude that the zeros of h can be partitioned as follows:

Z := {1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17}
   = {1, 5, 7, 11, 13, 17} ⊔ {3, 15} ⊔ {6, 12} ⊔ {9}
   = 1 · {1, 5, 7, 11, 13, 17} ⊔ 3 · {1, 5} ⊔ 6 · {1, 2} ⊔ 9 · {1}
   = (18/18) · (Z/18Z)× ⊔ (18/6) · (Z/6Z)× ⊔ (18/3) · (Z/3Z)× ⊔ (18/2) · (Z/2Z)×
   = ⊔_{s ∈ {2, 3, 6, 18}} (18/s) · (Z/sZ)×.

Similarly, the complement of the zero set can be partitioned as follows:

Z^c := {0, 2, 4, 8, 10, 14, 16}
    = {0} ⊔ 2 · {1, 2, 4, 5, 7, 8}
    = (18/1) · (Z/1Z)× ⊔ (18/9) · (Z/9Z)×
    = ⊔_{s ∈ {1, 9}} (18/s) · (Z/sZ)×.
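This partition can be confirmed mechanically: grouping each zero m by s = n/(n, m) must reproduce exactly the scaled unit groups (n/s)·(Z/sZ)×. A small illustrative check (our own code, not from the text) for the example above:

```python
from math import gcd

n = 18
Z = {1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 17}

def scaled_units(n, s):
    """(n/s) * (Z/sZ)^x, reduced modulo n."""
    return {(n // s) * k % n for k in range(1, s + 1) if gcd(k, s) == 1}

# Group the zeros by the order s of the associated primitive root of unity.
by_s = {}
for m in Z:
    by_s.setdefault(n // gcd(n, m), set()).add(m)
```

Here `by_s` has keys {2, 3, 6, 18}, and each class agrees with the corresponding scaled unit group, matching the partition displayed above.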
Example 2
Consider the following binary sequence of length n = 24, with ones in locations J := {0, 4, 6, 10}:

ĥ = {1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}.
The zero set of h is Z := {2, 3, 6, 9, 10, 14, 15, 18, 21, 22}. Following the same procedure as in the previous example, this set can be written as the following disjoint union of scaled multiplicative groups:

Z := {2, 3, 6, 9, 10, 14, 15, 18, 21, 22}
   = {6, 18} ⊔ {3, 9, 15, 21} ⊔ {2, 10, 14, 22}
   = 6 · {1, 3} ⊔ 3 · {1, 3, 5, 7} ⊔ 2 · {1, 5, 7, 11}
   = ⊔_{s ∈ {4, 8, 12}} (24/s) · (Z/sZ)×.
Similarly, the complement of the zero set (which always contains 0, since h[0] = |J|/n ≠ 0) can be partitioned as follows:

Z^c := {0, 1, 4, 5, 7, 8, 11, 12, 13, 16, 17, 19, 20, 23}
    = {0} ⊔ 12 · {1} ⊔ 8 · {1, 2} ⊔ 4 · {1, 5} ⊔ 1 · {1, 5, 7, 11, 13, 17, 19, 23}
    = ⊔_{s ∈ {1, 2, 3, 6, 24}} (24/s) · (Z/sZ)×.

3.3.8 Vanishing Sums of Roots of Unity
Finding orthogonal interpolating systems for bandlimited spaces requires focusing on the zeros of h. It is worth noting that these zeros have a geometric interpretation. Observe that if ĥ is the 0-1 sequence of length n with support J, then

h[m] = Σ_k ĥ[k] ω^{km} = Σ_{j∈J} ω^{jm}.

Thus, every entry of h is the vector sum of d = |J| roots of unity. What's more, h[m] can be constructed by taking the vectors from h[1] and multiplying each of their phases by m. We are concerned with when these sums vanish.
Figure 3.18: Geometric construction of h for B^{8:{0,1,4,5}}. (Multiplicities not shown.)
As an example, Figure 3.18 shows how to graphically construct the entries of h for
B^{8:{0,1,4,5}}, without taking a DFT. We start with the arrangement shown for h[1], in which roots of unity are positioned on the 0th, 1st, 4th, and 5th spokes. Then h[1] is equal to this vector sum, which vanishes. To construct h[2], multiply each of the phases from h[1] by 2. The vector at spoke 0 stays put, the vector at spoke 1 moves to spoke 2, the vector at spoke 4 moves to spoke 8 ≡ 0, and the vector at spoke 5 moves to spoke 10 ≡ 2. Afterwards, we end up with two vectors at spoke 0, and two vectors at spoke 2. h[2] is equal to this vector sum, which clearly does not vanish.⁶ In this fashion, we can graphically construct all entries of h, boxing those entries which vanish, as indicated in Figure 3.18.
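The spoke construction is simple to emulate in code (our own illustrative helper; the name `root_sum` is ours): place one unit vector at spoke j·m mod n for each j ∈ J, and add.

```python
import cmath

def root_sum(n, J, m):
    """Entry h[m] (up to the 1/n normalization) as a vector sum of |J|
    nth roots of unity; the root at spoke j lands on spoke j*m mod n."""
    return sum(cmath.exp(2j * cmath.pi * ((j * m) % n) / n) for j in J)
```

For B^{8:{0,1,4,5}} the sum vanishes at m = 1 but not at m = 2, matching Figure 3.18.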
A second example is shown in Figure 3.19, which illustrates the vector sums for B^{18:{0,1,8,9,10,17}}.
Figure 3.19: Geometric construction of h for B^{18:{0,1,8,9,10,17}}. (Multiplicities not shown.)
There is a caveat to this approach, however: it is not always obvious at a glance whether or not an arrangement of roots of unity will vanish. Figure 3.20 shows three examples of minimal vanishing sums, in which the removal of any vector will cause the sum to no longer vanish. While equally spaced roots of unity are easy to imagine, László Rédei [10] conceived an irregularly spaced minimal vanishing sum of 30th roots of unity, shown in the middle figure. And the rightmost arrangement, by John Steinberger, is an irregularly spaced minimal vanishing sum of 105th roots of unity, some of which have multiplicity two [11].

⁶Testing whether a vector sum vanishes can be cast as a physics problem. Think of the vectors as pennies positioned along a metal hoop that is parallel to the ground and supported only by a rod at its center. Then, does the hoop tip over?
Figure 3.20: Minimal vanishing sums of roots of unity.
3.3.9 Necessary Conditions
Theorem 3.3.20. Let B^J be a bandlimited subspace of C^n, with dimension d. If d > n/2, there can be no orthogonal interpolating system.
Proof. We prove by contradiction. Suppose B^J has an OIS with sampling sequence I. Define I^c = [0 : n − 1] \ I. Then by Theorem 2.4.9, B^J = span{e_i + v_i : i ∈ I}, where |I| = d and the v_i are orthogonal vectors in C^{I^c}, some possibly zero. However, none of the v_i's can be zero, because if v_i = 0, then e_i ∈ B^J, which is a contradiction since the DFT of e_i has no zeroes. Since |I| > n/2, |I^c| < n/2. Thus {v_i : i ∈ I} is a nonempty collection of more than n/2 orthogonal vectors, each residing in a space C^{I^c} of dimension less than n/2, which is not possible.
Based on numerical evidence, it seems that a stronger statement can be made.

Conjecture 3.3.21. A bandlimited space has an OIS only if d | n.

If the divisibility conjecture is true, Theorem 3.3.20 would be an immediate consequence. The conjecture is illustrated in Tables 3.6 and 3.7, which are sampling dictionaries for n = 6 and n = 8 that have been augmented to distinguish between non-orthogonal and orthogonal interpolating systems. Subspaces having orthogonal interpolating systems are highlighted in yellow. Observe that in each of these cases, the number of frequencies divides the length of the signal. We have only been able to prove the divisibility conjecture for a number of special cases.
n d frequencies “plain” IS orthogonal IS non-IS 6 1 0 0 { } {} { } {} 6 2 0,1 0,1 ; 0,2 0,3 { } { } { } { } {} 6 2 0,2 0,1 ; 0,2 0,3 { } { } { } {} { } 6 2 0,3 0,1 ; 0,3 0,2 { } {} { } { } { } 6 3 0,1,2 0,1,2 ; 0,1,3 ; 0,2,3 0,2,4 { } { } { } { } { } {} 6 3 0,1,3 0,1,2 ; 0,1,3 ; 0,2,3 0,2,4 { } { } { } { } {} { } 6 3 0,2,3 0,1,2 ; 0,1,3 ; 0,2,3 0,2,4 { } { } { } { } {} { } 6 3 0,2,4 0,1,2 ; 0,2,4 0,1,3 ; 0,2,3 { } {} { } { } { } { } 6 4 0,1,2,3 0,1,2,3 ; 0,1,2,4 ; 0,1,3,4 { } { } { } { } {} {} 6 4 0,1,2,4 0,1,2,3 ; 0,1,2,4 0,1,3,4 { } { } { } {} { } 6 4 0,1,3,4 0,1,2,3 ; 0,1,3,4 0,1,2,4 { } { } { } {} { } 6 5 0,1,2,3,4 0,1,2,3,4 { } { } {} {} 6 6 0,1,2,3,4,5 0,1,2,3,4,5 { } {} { } {}
Table 3.6: Full sampling dictionary for n = 6.

3.3 : Orthogonal Interpolating Systems
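Classifications like those in Table 3.6 can be reproduced numerically. The sketch below (Python, our own illustration rather than the dissertation's code) tests the interpolating-system property via invertibility of $E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}}$ (Theorem 3.2.1), and tests orthogonality by checking that the rows of that submatrix are mutually orthogonal, which for $d = 2$ reduces to the single equation of Lemma 3.3.23; the helper names are ours.

```python
import cmath
from itertools import combinations

n = 6
w = cmath.exp(2j * cmath.pi / n)

def submatrix(I, J):
    # E_I^T F* E_J : rows indexed by samples i in I, columns by frequencies j in J
    return [[w ** (i * j) for j in J] for i in I]

def is_interpolating(I, J, tol=1e-9):
    # invertibility test via Gaussian elimination with partial pivoting
    A = [row[:] for row in submatrix(I, J)]
    d = len(A)
    for k in range(d):
        piv = max(range(k, d), key=lambda r: abs(A[r][k]))
        if abs(A[piv][k]) < tol:
            return False
        A[k], A[piv] = A[piv], A[k]
        for r in range(k + 1, d):
            f = A[r][k] / A[k][k]
            for c in range(k, d):
                A[r][c] -= f * A[k][c]
    return True

def is_orthogonal(I, J, tol=1e-9):
    # OIS test: every pair of rows of E_I^T F* E_J is orthogonal
    M = submatrix(I, J)
    return all(
        abs(sum(M[a][b] * M[c][b].conjugate() for b in range(len(J)))) < tol
        for a in range(len(I)) for c in range(a + 1, len(I))
    )

# among sampling sets containing 0 (WLOG, Theorem 3.2.3), find the OIS for J = {0,1}
J = (0, 1)
sets = [(0,) + rest for rest in combinations(range(1, n), len(J) - 1)]
ois = [I for I in sets if is_interpolating(I, J) and is_orthogonal(I, J)]
print(ois)  # [(0, 3)], matching the first d = 2 row of Table 3.6
```

The same two tests, swept over all frequency sets, regenerate the entire augmented dictionary.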
3.3.10 Orthogonal Interchange Conjecture
Theorem 3.2.2 asserts that in a sampling dictionary, frequency sets and sampling sets can always be interchanged. The following conjecture asserts that orthogonality or the lack thereof is also preserved in this interchange.
Conjecture 3.3.22. $B_{\mathcal{J}}$ has $\mathcal{I}$ as an orthogonal interpolating system if and only if $B_{\mathcal{I}}$ has $\mathcal{J}$ as an orthogonal interpolating system.

Another way of phrasing the conjecture: if the augmented sampling dictionary is inverted, nothing changes but the column headers. This is demonstrated in Table 3.8.
Below are some special cases where we can prove Conjecture 3.3.22.
Lemma 3.3.23. When $d = 2$, $B_{\mathcal{J}}$ has $\mathcal{I}$ as an OIS if and only if $B_{\mathcal{I}}$ has $\mathcal{J}$ as an OIS.

Proof. Suppose $B_{\mathcal{J}}$ has OIS $\mathcal{I}$. Since $|\mathcal{I}| = 2$, $\Delta_{\mathcal{I}} = \{i_2 - i_1\}$, and the system of equations $\{h[m] = 0 : m \in \Delta_{\mathcal{I}}\}$ reduces to a single equation:
\[
0 = \omega^{(i_2-i_1)j_1} + \omega^{(i_2-i_1)j_2}.
\]
Reorganizing this equation,
\begin{align*}
0 &= \omega^{(i_2-i_1)j_1} + \omega^{(i_2-i_1)j_2} \\
\omega^{(i_2-i_1)j_1} &= -\omega^{(i_2-i_1)j_2} \\
1 &= -\omega^{(i_2-i_1)j_2 - (i_2-i_1)j_1} \\
&= -\omega^{(i_2-i_1)(j_2-j_1)} \\
&= -\omega^{(j_2-j_1)i_2 - (j_2-j_1)i_1} \\
\omega^{(j_2-j_1)i_1} &= -\omega^{(j_2-j_1)i_2} \\
0 &= \omega^{(j_2-j_1)i_1} + \omega^{(j_2-j_1)i_2}.
\end{align*}
Since $|\mathcal{J}| = 2$, this last equation is equivalent to the matrix equation $E_{\Delta_{\mathcal{J}}}^T \mathcal{F}^* E_{\mathcal{I}} \mathbf{1} = 0$, implying that $B_{\mathcal{I}}$ has $\mathcal{J}$ as an OIS. $\square$
Corollary 3.3.24. When $d = 2$, if $B_{\mathcal{J}}$ has $\mathcal{I}$ as an OIS, then $n$ is even.

Proof. Following the proof of Lemma 3.3.23, the equation $1 = -\omega^{(i_2-i_1)j_2 - (i_2-i_1)j_1}$ requires that $\omega^{(i_2-i_1)(j_2-j_1)} = -1$, by the uniqueness of the additive inverse. This implies that
\[
\frac{2\pi}{n}(i_2-i_1)(j_2-j_1) = (2k+1)\pi, \qquad k \in \mathbb{N},
\]
or equivalently,
\[
(i_2-i_1)(j_2-j_1) = nk + \frac{n}{2}, \qquad k \in \mathbb{N}.
\]
Since the left-hand side is an integer, and $nk$ is an integer, $n/2$ must be an integer. $\square$
Lemma 3.3.25. When $d = 3$, $B_{\mathcal{J}}$ has $\mathcal{I}$ as an OIS if and only if $B_{\mathcal{I}}$ has $\mathcal{J}$ as an OIS.

Proof. The four-page proof can be found in Appendix E.
3.3.11 Fuglede’s Conjecture
Our study of orthogonal interpolating systems in bandlimited spaces is related to the discrete version of Fuglede's conjecture, a still unsettled problem posed by Bent Fuglede in 1974. The conjecture states that a bandlimited space (also referred to as a Paley–Wiener space $PW(S)$, where $S$ is the support in the Fourier domain) has an orthogonal interpolating system if and only if its support in the Fourier domain is a tile. For example, the bandlimited space $B^{8:\{0,1,4,5\}}$ has an OIS, and the frequency set $\{0,1,4,5\}$ tiles the torus $\mathbb{Z}/8\mathbb{Z}$, since
\[
\{0,1,4,5\} \cup \left(\{0,1,4,5\} + 2\right) = \{0,1,2,3,4,5,6,7\} = \mathbb{Z}/8\mathbb{Z}.
\]
For another example, the bandlimited space $B^{16:\{0,4,6,10\}}$ has an OIS, and $\mathbb{Z}/16\mathbb{Z}$ has the tiling
\[
\{0,4,6,10\} \cup \left(\{0,4,6,10\} + 1\right) \cup \left(\{0,4,6,10\} + 8\right) \cup \left(\{0,4,6,10\} + 9\right) = \mathbb{Z}/16\mathbb{Z}.
\]
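Tilings like the two above are easy to check by machine. A minimal sketch (Python, ours rather than the dissertation's):

```python
def tiles(J, translates, n):
    """Check that the given translates of J cover Z/nZ exactly once (no overlaps)."""
    covered = [(j + t) % n for t in translates for j in J]
    return sorted(covered) == list(range(n))

# the two tilings exhibited in the text
print(tiles({0, 1, 4, 5}, [0, 2], 8))          # True
print(tiles({0, 4, 6, 10}, [0, 1, 8, 9], 16))  # True
```

Because a tile of size $d$ needs exactly $n/d$ translates, a brute-force search over translate sets is feasible for small $n$.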
Notice that if Fuglede's conjecture is true, the divisibility conjecture is an immediate corollary. Fuglede's conjecture can be stated in higher dimensions, in which case the Fourier transform is multi-dimensional, and the domain need not be discrete. In 2003, Tao [12] provided a counterexample to Fuglede's conjecture in dimension 5. This was followed by counterexamples in dimensions 3 and 4, in both directions of the conjecture. However, the conjecture remains open in dimensions 1 and 2.

Returning to dimension 1, it is also unknown whether there exists a simple way to determine if a (frequency) set $\mathcal{J}$ tiles the torus $\mathbb{Z}/n\mathbb{Z}$. Coven and Meyerowitz [13] have provided two conditions, named T1 and T2, which have been proven sufficient for tiling, but not proven necessary. To state these conditions, we first establish a few definitions.
Definition 3.3.26. Given $n$ and $\mathcal{J} \subseteq \mathbb{Z}/n\mathbb{Z}$, define the polynomial $p_{\mathcal{J}}(x) := \sum_{j \in \mathcal{J}} x^j$. Then we define
\[
C_{n:\mathcal{J}} = C_{\mathcal{J}} := \{d : \Phi_d(x) \mid p_{\mathcal{J}}(x)\},
\]
where $\Phi_d(x)$ is the $d$th cyclotomic polynomial. We also define $D_{n:\mathcal{J}} = D_{\mathcal{J}}$ to be the subset of $C_{\mathcal{J}}$ consisting only of prime powers.
Definition 3.3.27 (T1). A set $\mathcal{J} \subseteq \mathbb{Z}/n\mathbb{Z}$ satisfies T1 if $p_{\mathcal{J}}(1) = \prod_{p^k \in D_{\mathcal{J}}} p$.

Note that in condition T1, we multiply the prime bases and not their powers.

Definition 3.3.28 (T2). A set $\mathcal{J} \subseteq \mathbb{Z}/n\mathbb{Z}$ satisfies T2 if, given two prime powers $p^e$ and $q^f$ in $D_{\mathcal{J}}$ with distinct bases $p \neq q$, their product $p^e q^f$ is an element of $C_{\mathcal{J}}$.
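Both conditions are mechanical to verify for small $n$. The sketch below (Python, our own illustration) computes $C_{\mathcal{J}}$ by testing whether $p_{\mathcal{J}}$ vanishes at the primitive $d$th roots of unity, a numerical stand-in for cyclotomic divisibility, and then evaluates T1 for $\mathcal{J} = \{0,1,4,5\}$ with $n = 8$; all function names are ours.

```python
import cmath
from math import gcd, prod

def C_set(J, n):
    """d such that Phi_d divides p_J.  Since p_J has integer coefficients,
    Phi_d | p_J iff p_J vanishes at every primitive d-th root of unity."""
    out = set()
    for d in range(2, n + 1):
        roots = [cmath.exp(2j * cmath.pi * k / d)
                 for k in range(1, d + 1) if gcd(k, d) == 1]
        if all(abs(sum(z ** j for j in J)) < 1e-9 for z in roots):
            out.add(d)
    return out

def prime_base(d):
    """Return p if d is a prime power p^k (k >= 1), else None."""
    for p in range(2, d + 1):
        if d % p == 0:
            while d % p == 0:
                d //= p
            return p if d == 1 else None

def satisfies_T1(J, n):
    D = [d for d in C_set(J, n) if prime_base(d)]
    # p_J(1) = |J|; T1 multiplies the prime bases, not their powers
    return len(J) == prod(prime_base(d) for d in D)

J, n = {0, 1, 4, 5}, 8
print(sorted(C_set(J, n)), satisfies_T1(J, n))  # [2, 8] True
```

Here $p_{\mathcal{J}}(x) = 1 + x + x^4 + x^5 = (1+x)(1+x^4) = \Phi_2(x)\,\Phi_8(x)$, so $D_{\mathcal{J}} = \{2, 8\}$ and $p_{\mathcal{J}}(1) = 4 = 2 \cdot 2$, confirming T1; T2 is vacuous here since only the prime 2 appears.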
The chart in Figure 3.21, adapted from Amiot [14], shows the currently known relationships between T1, T2, tiling, and the existence of an OIS. T1 and T2 together are sufficient for $\mathcal{J}$ to be a tile. While T1 is necessary for tiling, it is not known whether T2 is also necessary. T2's necessity has been shown in the case where $n$ is the product of at most two prime powers. Lastly, it was proven by Łaba [15] that T1 and T2 together are sufficient for $B^{n:\mathcal{J}}$ to have an orthogonal interpolating system.
[Chart: implication arrows among the nodes "Tiles Z/nZ", "T1", "T1 and T2", "Tiles Z/nZ and n is a product of at most two prime powers", and "B^{n:J} has an orthogonal interpolating system".]

Figure 3.21: Implications between T1, T2, tiling, and existence of OIS.
n   d   frequencies          orthogonal IS
8   1   {0}                  {0}
8   2   {0,1}                {0,4}
8   2   {0,2}                {0,2}
8   2   {0,3}                {0,4}
8   2   {0,4}                {0,1}; {0,3}
8   3   {0,1,2}              {}
8   3   {0,1,3}              {}
8   3   {0,1,4}              {}
8   3   {0,2,3}              {}
8   3   {0,2,4}              {}
8   3   {0,2,5}              {}
8   3   {0,3,4}              {}
8   4   {0,1,2,3}            {0,2,4,6}
8   4   {0,1,2,4}            {}
8   4   {0,1,2,5}            {}
8   4   {0,1,3,4}            {}
8   4   {0,1,3,5}            {}
8   4   {0,1,4,5}            {0,1,4,5}
8   4   {0,2,3,4}            {}
8   4   {0,2,3,5}            {0,2,4,6}
8   4   {0,2,4,5}            {}
8   4   {0,2,4,6}            {0,1,2,3}; {0,2,3,5}
8   5   {0,1,2,3,4}          {}
8   5   {0,1,2,3,5}          {}
8   5   {0,1,2,4,5}          {}
8   5   {0,1,2,4,6}          {}
8   5   {0,1,3,4,5}          {}
8   5   {0,1,3,4,6}          {}
8   5   {0,2,3,4,5}          {}
8   6   {0,1,2,3,4,5}        {}
8   6   {0,1,2,3,4,6}        {}
8   6   {0,1,2,3,5,6}        {}
8   6   {0,1,2,4,5,6}        {}
8   7   {0,1,2,3,4,5,6}      {}
8   8   {0,1,2,3,4,5,6,7}    {0,1,2,3,4,5,6,7}

Table 3.7: Full sampling dictionary for n = 8.
sampling dictionary for n = 6
frequencies J    interpolating systems (IS)        OIS                 non-IS
{0}              {}                                {0}                 {}
{0,1}            {0,1}, {0,2}                      {0,3}               {}
{0,2}            {0,1}, {0,2}                      {}                  {0,3}
{0,3}            {}                                {0,1}, {0,3}        {0,2}
{0,1,2}          {0,1,2}, {0,1,3}, {0,2,3}         {0,2,4}             {}
{0,1,3}          {0,1,2}, {0,1,3}, {0,2,3}         {}                  {0,2,4}
{0,2,3}          {0,1,2}, {0,1,3}, {0,2,3}         {}                  {0,2,4}
{0,2,4}          {}                                {0,1,2}, {0,2,4}    {0,1,3}, {0,2,3}
{0,1,2,3}        {0,1,2,3}, {0,1,2,4}, {0,1,3,4}   {}                  {}
{0,1,2,4}        {0,1,2,3}, {0,1,2,4}              {}                  {0,1,3,4}
{0,1,3,4}        {0,1,2,3}, {0,1,3,4}              {}                  {0,1,2,4}
{0,1,2,3,4}      {0,1,2,3,4}                       {}                  {}
{0,1,2,3,4,5}    {}                                {0,1,2,3,4,5}       {}

inverse sampling dictionary for n = 6
sampling set I   frequencies for which             frequencies for which    non-reconstructible
                 I is an IS                        I is an OIS              frequencies
{0}              {}                                {0}                 {}
{0,1}            {0,1}, {0,2}                      {0,3}               {}
{0,2}            {0,1}, {0,2}                      {}                  {0,3}
{0,3}            {}                                {0,1}, {0,3}        {0,2}
{0,1,2}          {0,1,2}, {0,1,3}, {0,2,3}         {0,2,4}             {}
{0,1,3}          {0,1,2}, {0,1,3}, {0,2,3}         {}                  {0,2,4}
{0,2,3}          {0,1,2}, {0,1,3}, {0,2,3}         {}                  {0,2,4}
{0,2,4}          {}                                {0,1,2}, {0,2,4}    {0,1,3}, {0,2,3}
{0,1,2,3}        {0,1,2,3}, {0,1,2,4}, {0,1,3,4}   {}                  {}
{0,1,2,4}        {0,1,2,3}, {0,1,2,4}              {}                  {0,1,3,4}
{0,1,3,4}        {0,1,2,3}, {0,1,3,4}              {}                  {0,1,2,4}
{0,1,2,3,4}      {0,1,2,3,4}                       {}                  {}
{0,1,2,3,4,5}    {}                                {0,1,2,3,4,5}       {}

Table 3.8: Orthogonal interchange demonstration.
3.4 Prime n
We now examine patterns exhibited by interpolating systems and orthogonal interpolating systems when $n$ is restricted to be a prime number. To start with an example, Table 3.9 shows the augmented sampling dictionary when $n = 7$.

n   d   frequencies        non-orthogonal IS                                       orthogonal IS      non-IS
7   1   {0}                {}                                                      {0}                {}
7   2   {0,1}              {0,1}; {0,2}; {0,3}                                     {}                 {}
7   2   {0,2}              {0,1}; {0,2}; {0,3}                                     {}                 {}
7   2   {0,3}              {0,1}; {0,2}; {0,3}                                     {}                 {}
7   3   {0,1,2}            {0,1,2}; {0,1,3}; {0,1,4}; {0,2,3}; {0,2,4}             {}                 {}
7   3   {0,1,3}            {0,1,2}; {0,1,3}; {0,1,4}; {0,2,3}; {0,2,4}             {}                 {}
7   3   {0,1,4}            {0,1,2}; {0,1,3}; {0,1,4}; {0,2,3}; {0,2,4}             {}                 {}
7   3   {0,2,3}            {0,1,2}; {0,1,3}; {0,1,4}; {0,2,3}; {0,2,4}             {}                 {}
7   3   {0,2,4}            {0,1,2}; {0,1,3}; {0,1,4}; {0,2,3}; {0,2,4}             {}                 {}
7   4   {0,1,2,3}          {0,1,2,3}; {0,1,2,4}; {0,1,3,4}; {0,1,3,5}; {0,2,3,4}   {}                 {}
7   4   {0,1,2,4}          {0,1,2,3}; {0,1,2,4}; {0,1,3,4}; {0,1,3,5}; {0,2,3,4}   {}                 {}
7   4   {0,1,3,4}          {0,1,2,3}; {0,1,2,4}; {0,1,3,4}; {0,1,3,5}; {0,2,3,4}   {}                 {}
7   4   {0,1,3,5}          {0,1,2,3}; {0,1,2,4}; {0,1,3,4}; {0,1,3,5}; {0,2,3,4}   {}                 {}
7   4   {0,2,3,4}          {0,1,2,3}; {0,1,2,4}; {0,1,3,4}; {0,1,3,5}; {0,2,3,4}   {}                 {}
7   5   {0,1,2,3,4}        {0,1,2,3,4}; {0,1,2,3,5}; {0,1,2,4,5}                   {}                 {}
7   5   {0,1,2,3,5}        {0,1,2,3,4}; {0,1,2,3,5}; {0,1,2,4,5}                   {}                 {}
7   5   {0,1,2,4,5}        {0,1,2,3,4}; {0,1,2,3,5}; {0,1,2,4,5}                   {}                 {}
7   6   {0,1,2,3,4,5}      {0,1,2,3,4,5}                                           {}                 {}
7   7   {0,1,2,3,4,5,6}    {}                                                      {0,1,2,3,4,5,6}    {}
Table 3.9: Sampling dictionary of bandlimited spaces for n =7.
First observe that there are no non-interpolating systems: any sampling set of size $d$ will reconstruct signals from a bandlimited subspace of dimension $d$. Secondly, only the trivial subspaces have orthogonal interpolating systems (highlighted in yellow). The following theorems state that all sampling dictionaries must look like this when $n$ is prime.
Theorem 3.4.1. Let $d = |\mathcal{J}|$. Then any set of $d$ samples is an interpolating system for $B^{n:\mathcal{J}}$ when $n$ is prime.

Proof. By Theorem 3.2.1, $\mathcal{I}$ is an interpolating system for $B_{\mathcal{J}}$ if and only if the square submatrix $E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}}$ is invertible. This invertibility is assured by Chebotarev's theorem, which says that any minor of the Fourier matrix is nonzero when $n$ is prime. (See Appendix F for a proof of this fact.) $\square$
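Chebotarev's theorem is easy to probe numerically for a small prime. The sketch below (Python, ours) evaluates every proper square minor of the $7 \times 7$ DFT matrix; the numerical tolerance is an assumption of the illustration, not part of the theorem.

```python
import cmath
from itertools import combinations

n = 7
w = cmath.exp(-2j * cmath.pi / n)
F = [[w ** (r * c) for c in range(n)] for r in range(n)]  # DFT matrix

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    A = [row[:] for row in M]
    m = len(A)
    d = 1 + 0j
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(A[r][k]))
        if abs(A[piv][k]) < 1e-12:
            return 0j
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, m):
            f = A[r][k] / A[k][k]
            for c in range(k, m):
                A[r][c] -= f * A[k][c]
    return d

# smallest magnitude over all proper minors (all row/column subsets of equal size)
min_abs = min(
    abs(det([[F[r][c] for c in cols] for r in rows]))
    for size in range(1, n)
    for rows in combinations(range(n), size)
    for cols in combinations(range(n), size)
)
print(min_abs > 1e-9)  # every minor is nonzero, as Chebotarev's theorem guarantees
```

For composite $n$ the same sweep immediately turns up singular minors, which is exactly why non-interpolating systems appear in Tables 3.6 and 3.7.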
Theorem 3.4.2. When $n$ is prime, the only bandlimited subspaces $B^{n:\mathcal{J}} \subseteq \mathbb{C}^n$ having orthogonal interpolating systems are the trivial subspaces, where $d = |\mathcal{J}|$ is either $1$ or $n$.

Proof. We prove this by way of contradiction. Suppose $B_{\mathcal{J}}$ is a nontrivial subspace of $\mathbb{C}^n$ and has an OIS $\mathcal{I}$. The orthogonality conditions require that $h[m] = \frac{1}{n}\sum_{j \in \mathcal{J}} \omega^{jm}$ vanish at the nonzero indices in $\Delta_{\mathcal{I}}$; since $n$ is prime, each $\omega^m$ with $m \neq 0$ is a primitive $n$th root of unity with minimal polynomial $\Phi_n$, so $h$ in fact vanishes at every nonzero index. Meanwhile,
\[
h[0] = \frac{1}{n}\sum_{j \in \mathcal{J}} \omega^{j \cdot 0} = \frac{d}{n}.
\]
So $h$ becomes a (scaled) Kronecker delta. However, the DFT of $h$ would then be the all ones vector (up to scale), implying that $\mathcal{J} = [n]$, which contradicts the premise that $d < n$. $\square$
3.5 Uniform Sampling
In this final section devoted to bandlimited spaces, we focus on uniform sampling, in which the sampling set $\mathcal{I}$ is restricted to be an arithmetic progression.
3.5.1 Discrete Nyquist-Shannon Sampling Theorem
Electrical engineers naturally associate uniform sampling with the Nyquist-Shannon sampling theorem. To properly put the results which will follow into perspective, it is helpful to first derive a discrete version of Shannon's sampling theorem, using the continuous-time proof as a guide.

The standard continuous-time proof involves constructing periodic copies of a contiguous baseband spectrum in the Fourier domain, and then discarding all copies except for the original one, so that the net effect is the identity. This identity map is built by first convolving the signal with a train of delta functions in the Fourier domain, and then multiplying by a brick-wall filter in the Fourier domain. By invoking the Fourier convolution theorem, we can translate these steps back to the time domain, yielding Shannon's result.

We now replicate these steps in the discrete-time setting of $\mathbb{Z}/n\mathbb{Z}$. Firstly, the $\delta$ function is replaced by the Kronecker delta $\delta$, which is $1$ at the origin and $0$ elsewhere. The Kronecker delta shifted by $j$ units is defined by
\[
\delta_j[m] = \begin{cases} 1, & m \equiv j \bmod n \\ 0, & \text{otherwise.} \end{cases}
\]
We write 1 for the constant function on Z/nZ whose value is 1:
\[
\mathbf{1} = (1, 1, \ldots, 1).
\]
All signals with domain $\mathbb{Z}/n\mathbb{Z}$ are assumed to be periodic with period $n$, so the discrete version of a delta train should be an evenly spaced repetition of Kronecker delta functions $\delta$ within one period. Suppose the spacing is $p$, a divisor of $n$. Then define
\[
\mathrm{III}_p = \sum_{k=0}^{\frac{n}{p}-1} \delta_{kp}.
\]
In components,
\[
\mathrm{III}_p[m] = \begin{cases} 1, & m \equiv 0, p, 2p, \ldots, \left(\tfrac{n}{p}-1\right)p \bmod n \\ 0, & \text{otherwise.} \end{cases}
\]
The $\delta$'s are spaced $p$ samples apart, and there are $n/p$ of them in one period of $\mathrm{III}_p$. We will need to know the DFT of $\mathrm{III}_p$. Firstly,
\[
\mathcal{F}\delta_j[m] = \omega^{-jm}.
\]
Thus, by linearity,
\[
\mathcal{F}\,\mathrm{III}_p = \sum_{k=0}^{\frac{n}{p}-1} \mathcal{F}\delta_{kp} = \sum_{k=0}^{\frac{n}{p}-1} \omega^{-kp\,\cdot}.
\]
Evaluating this componentwise, a simple calculation yields
\[
\mathcal{F}\,\mathrm{III}_p[m] = \sum_{k=0}^{\frac{n}{p}-1} \omega^{-kpm}
= \begin{cases} \dfrac{n}{p}, & m \equiv 0 \bmod \dfrac{n}{p} \\[4pt] 0, & m \not\equiv 0 \bmod \dfrac{n}{p}. \end{cases}
\]
This result can be written compactly as
\[
\mathcal{F}\,\mathrm{III}_p = \frac{n}{p}\,\mathrm{III}_{n/p}. \tag{3.2}
\]
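The compact identity (3.2) can be sanity-checked numerically with the DFT convention $\mathcal{F}f[m] = \sum_k f[k]\,\omega^{-km}$ used above. A small check (ours), taking $n = 12$ and $p = 3$:

```python
import cmath

n, p = 12, 3
w = cmath.exp(2j * cmath.pi / n)

def shah(q):
    # III_q on Z/nZ: 1 at multiples of q, 0 elsewhere
    return [1 if m % q == 0 else 0 for m in range(n)]

def dft(f):
    return [sum(f[k] * w ** (-k * m) for k in range(n)) for m in range(n)]

lhs = dft(shah(p))                       # F III_p
rhs = [n / p * x for x in shah(n // p)]  # (n/p) III_{n/p}
print(max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-9)  # True
```

The delta train is its own transform up to scale and a change of spacing, exactly as in the continuous-time theory.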
So we have our delta train. Next, the brick-wall filter is represented in discrete time by a block of consecutive Kronecker deltas. For an index set $\mathcal{I}$ of consecutive integers modulo $n$, say $\mathcal{I} = [i_{\min} : i_{\max}]$, we set
\[
\Pi_{\mathcal{I}} = \sum_{k \in \mathcal{I}} \delta_k.
\]
We will also need its DFT. Computing it directly,
\[
\mathcal{F}\Pi_{\mathcal{I}} = \sum_{k \in \mathcal{I}} \omega^{-k\,\cdot},
\]
and since the integers in $\mathcal{I}$ are consecutive, modulo $n$, the right-hand side is a geometric series. At an index $m$,
\[
\mathcal{F}\Pi_{\mathcal{I}}[m] = \sum_{k \in \mathcal{I}} \omega^{-km},
\]
and a now familiar calculation gives
\[
\sum_{k \in \mathcal{I}} \omega^{-km}
= \begin{cases} |\mathcal{I}|, & m \equiv 0 \bmod n \\[4pt] \omega^{-\mathcal{I}_{\mathrm{mid}}\,m}\,\dfrac{\sin\!\left(\frac{\pi m |\mathcal{I}|}{n}\right)}{\sin\!\left(\frac{\pi m}{n}\right)}, & m \not\equiv 0 \bmod n, \end{cases}
\]
where $\mathcal{I}_{\mathrm{mid}} = \frac{\max(\mathcal{I}) + \min(\mathcal{I})}{2}$. Since the Fourier transform of a brick-wall filter is the sinc function, the DFT we have just computed may be considered a discrete sinc function, or "dinc". The dinc relative to an
[Plot: amplitude versus time of the dinc for n = 45 and I = [-7:7], showing a main lobe at the origin and oscillating side lobes over one period.]

Figure 3.22: The discrete sinc, or "dinc".
index set $\mathcal{I}$ is defined as
\[
\mathrm{dinc}_{\mathcal{I}}[m] = \begin{cases} 1, & m \equiv 0 \bmod n \\[4pt] \dfrac{1}{|\mathcal{I}|}\,\omega^{-\mathcal{I}_{\mathrm{mid}}\,m}\,\dfrac{\sin\!\left(\frac{\pi m |\mathcal{I}|}{n}\right)}{\sin\!\left(\frac{\pi m}{n}\right)}, & m \not\equiv 0 \bmod n. \end{cases}
\]
Note how the dinc slightly differs from the sinc, having the form sine over sine. Figure 3.22 shows a graph of the dinc. In terms of the dinc, we can rewrite the brick-wall filter's DFT pair as
\[
\mathcal{F}\Pi_{\mathcal{I}} = |\mathcal{I}|\,\mathrm{dinc}_{\mathcal{I}}.
\]
For the inverse transform, by Fourier duality,
\[
\mathcal{F}^{-1}\Pi_{\mathcal{I}} = \frac{1}{n}\left(\mathcal{F}\Pi_{\mathcal{I}}\right)^{-} = \frac{|\mathcal{I}|}{n}\,\mathrm{dinc}_{\mathcal{I}}^{-},
\]
where the superscripted negative sign indicates reversal:
\[
f^{-}[m] = f[-m].
\]
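The sine-over-sine closed form can be checked against the defining sum $\mathrm{dinc}_{\mathcal{I}} = \frac{1}{|\mathcal{I}|}\mathcal{F}\Pi_{\mathcal{I}}$. A small numerical check (ours), using the parameters of Figure 3.22, $n = 45$ and $\mathcal{I} = [-7:7]$:

```python
import cmath
import math

n = 45
I = list(range(-7, 8))               # the index set [-7:7]
size, mid = len(I), (max(I) + min(I)) / 2   # |I| = 15, I_mid = 0 here

w = cmath.exp(2j * cmath.pi / n)

def dinc_sum(m):
    # defining sum: (1/|I|) * F Pi_I [m] = (1/|I|) * sum_{k in I} w^{-km}
    return sum(w ** (-k * m) for k in I) / size

def dinc_closed(m):
    # the sine-over-sine closed form, with the m = 0 (mod n) case handled separately
    if m % n == 0:
        return 1.0
    return (w ** (-mid * m) / size) * \
        math.sin(math.pi * m * size / n) / math.sin(math.pi * m / n)

err = max(abs(dinc_sum(m) - dinc_closed(m)) for m in range(n))
print(err < 1e-9)  # True
```

Note the $1/|\mathcal{I}|$ normalization, which makes the main lobe peak equal to 1, mirroring the unit peak of the continuous sinc.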
We are now equipped to carry out the discrete analogue of the continuous-time proof.
Theorem 3.5.1 (Discrete Nyquist-Shannon). Let $\mathcal{J}$ be a contiguous frequency set of size $d$. If $d$ divides $n$, with $n = d \cdot s$, then signals from $B^{n:\mathcal{J}}$ can be reconstructed via the interpolation equation
\[
f[m] = \sum_{k=0}^{d-1} f[kn/d]\,\mathrm{dinc}_{\mathcal{J}}^{-}\!\left[m - \frac{kn}{d}\right].
\]
(Take $d$ samples, with sampling period $s$, and use the shifted dincs to reconstruct.)
Proof. Convolving $\mathcal{F}f$ with the delta train $\mathrm{III}_d$, then multiplying pointwise by $\Pi_{\mathcal{J}}$, has no net effect. Thus,
\[
\mathcal{F}f = \Pi_{\mathcal{J}}\,(\mathcal{F}f * \mathrm{III}_d).
\]
Applying $\mathcal{F}^{-1}$,
\begin{align*}
f = \mathcal{F}^{-1}\mathcal{F}f &= \mathcal{F}^{-1}\big(\Pi_{\mathcal{J}}\,(\mathcal{F}f * \mathrm{III}_d)\big) \\
&= \mathcal{F}^{-1}\Pi_{\mathcal{J}} * \mathcal{F}^{-1}(\mathcal{F}f * \mathrm{III}_d) \\
&= \mathcal{F}^{-1}\Pi_{\mathcal{J}} * n\,(\mathcal{F}^{-1}\mathcal{F}f)(\mathcal{F}^{-1}\mathrm{III}_d) \\
&= \frac{d}{n}\,\mathrm{dinc}_{\mathcal{J}}^{-} * \left(n f \cdot \frac{1}{d}\,\mathrm{III}_{n/d}\right) \\
&= \mathrm{dinc}_{\mathcal{J}}^{-} * (f\,\mathrm{III}_{n/d}).
\end{align*}
Evaluating this equation componentwise yields the interpolation equation. $\square$
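The interpolation equation is easy to exercise end to end. In this sketch (Python, ours), $\mathrm{dinc}_{\mathcal{J}}$ is computed from its defining sum rather than the closed form; the signal is a random member of $B^{12:\{0,1,2,3\}}$, sampled every $s = 3$ points:

```python
import cmath
import random

n, d = 12, 4
s = n // d                       # sampling period
J = [0, 1, 2, 3]                 # contiguous frequency set of size d
w = cmath.exp(2j * cmath.pi / n)

def dinc(m):
    # dinc_J[m] = (1/|J|) * sum_{j in J} w^{-jm}   (defining sum)
    return sum(w ** (-j * m) for j in J) / len(J)

# random signal bandlimited to J: f[m] = sum_j c_j w^{jm}
random.seed(0)
c = [complex(random.random(), random.random()) for _ in J]
f = [sum(cj * w ** (j * m) for cj, j in zip(c, J)) for m in range(n)]

# reconstruct from the d uniform samples f[0], f[s], ..., f[(d-1)s]
# via  f[m] = sum_k f[ks] dinc_J^-[m - ks],  where dinc^-[u] = dinc[-u]
rec = [sum(f[k * s] * dinc(-(m - k * s)) for k in range(d)) for m in range(n)]

err = max(abs(a - b) for a, b in zip(f, rec))
print(err < 1e-9)  # True: the d samples reproduce the signal exactly
```

Reconstruction is exact because, within a contiguous $\mathcal{J}$ of size $d$, no two frequencies collide modulo $d$ after sampling at period $s = n/d$.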
3.5.2 Generalization of Nyquist-Shannon
Hardware constraints may permit only uniform sampling. Such situations motivate us to consider the dual of the sampling problem, in which we are given a fixed uniform sampling set $\mathcal{I}$, and wish to know which bandlimited spaces have $\mathcal{I}$ as an interpolating system.

The discrete Nyquist-Shannon theorem says that uniform sampling can be used to reconstruct spaces whose spectrum is a contiguous block. However, many more spectral patterns can be reconstructed with uniform sampling. A hint of this was already suggested by the classic problem from Chapter 1, in which sub-Nyquist uniform sampling can reconstruct a signal whose spectrum consists of two disjoint islands.
Consider the following example: we have a signal of length n = 12, and are constrained to using the uniform sampling set
\[
\mathcal{I} = \{0, 3, 6, 9\}.
\]
Applying Theorem 3.5.1, the discrete Nyquist-Shannon theorem asserts that $\mathcal{I}$ can reconstruct bandlimited spaces $B^{12:\mathcal{J}}$ with four contiguous frequencies, such as
\[
\mathcal{J} = \{-1, 0, 1, 2\}.
\]
However, $\mathcal{I}$ can actually be used to reconstruct 81 different frequency sets. These are enumerated in Table 3.10. The sets have been partitioned along their rotational equivalence classes. The first row of twelve frequency sets comprises the contiguous spectra predicted by the discrete Nyquist-Shannon theorem. However, the remaining rows show discontiguous spectra which can also be reconstructed. The next theorem explains the underlying pattern behind these spectra. We will also explain later how to count and enumerate them.
{0,1,2,3}, {1,2,3,4}, {2,3,4,5}, {3,4,5,6}, {4,5,6,7}, {5,6,7,8},
{6,7,8,9}, {7,8,9,10}, {8,9,10,11}, {0,9,10,11}, {0,1,10,11}, {0,1,2,11}

{0,2,3,5}, {1,3,4,6}, {2,4,5,7}, {3,5,6,8}, {4,6,7,9}, {5,7,8,10},
{6,8,9,11}, {0,7,9,10}, {1,8,10,11}, {0,2,9,11}, {0,1,3,10}, {1,2,4,11}

{0,1,3,6}, {1,2,4,7}, {2,3,5,8}, {3,4,6,9}, {4,5,7,10}, {5,6,8,11},
{0,6,7,9}, {1,7,8,10}, {2,8,9,11}, {0,3,9,10}, {1,4,10,11}, {0,2,5,11}

{0,3,5,6}, {1,4,6,7}, {2,5,7,8}, {3,6,8,9}, {4,7,9,10}, {5,8,10,11},
{0,6,9,11}, {0,1,7,10}, {1,2,8,11}, {0,2,3,9}, {1,3,4,10}, {2,4,5,11}

{0,1,2,7}, {1,2,3,8}, {2,3,4,9}, {3,4,5,10}, {4,5,6,11}, {0,5,6,7},
{1,6,7,8}, {2,7,8,9}, {3,8,9,10}, {4,9,10,11}, {0,5,10,11}, {0,1,6,11}

{0,2,5,7}, {1,3,6,8}, {2,4,7,9}, {3,5,8,10}, {4,6,9,11}, {0,5,7,10},
{1,6,8,11}, {0,2,7,9}, {1,3,8,10}, {2,4,9,11}, {0,3,5,10}, {1,4,6,11}

{0,1,6,7}, {1,2,7,8}, {2,3,8,9}, {3,4,9,10}, {4,5,10,11}, {0,5,6,11}

{0,3,6,9}, {1,4,7,10}, {2,5,8,11}
Table 3.10: Spectra reconstructible with $\mathcal{I} = \{0, 3, 6, 9\}$, where $n = 12$.
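The count of 81 can be verified by brute force over all $\binom{12}{4} = 495$ candidate spectra, testing invertibility of $E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}}$ directly (Theorem 3.2.1). The sketch (Python, ours):

```python
import cmath
from itertools import combinations

n = 12
I = (0, 3, 6, 9)
w = cmath.exp(2j * cmath.pi / n)

def invertible(J, tol=1e-9):
    # Gaussian elimination on M[a][b] = w^{i_a j_b} (rows: samples, cols: frequencies)
    A = [[w ** (i * j) for j in J] for i in I]
    d = len(A)
    for k in range(d):
        piv = max(range(k, d), key=lambda r: abs(A[r][k]))
        if abs(A[piv][k]) < tol:
            return False
        A[k], A[piv] = A[piv], A[k]
        for r in range(k + 1, d):
            f = A[r][k] / A[k][k]
            for c in range(k, d):
                A[r][c] -= f * A[k][c]
    return True

good = [J for J in combinations(range(n), 4) if invertible(J)]
print(len(good))  # 81
```

The count also follows from the distinct-residue criterion of the next theorem: with $s = 3$ and $n = 12$, the products $3j \bmod 12$ are distinct exactly when the four frequencies occupy four different residue classes modulo 4, giving $3^4 = 81$ choices.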
Theorem 3.5.2 (Generalized Nyquist-Shannon). Let $\mathcal{I}$ be a set of $d$ uniformly spaced samples, with spacing $s$. Then $\mathcal{I}$ is an interpolating system for $B_{\mathcal{J}}$, where $\mathcal{J} := \{j_1, j_2, \ldots, j_d\}$, if and only if the following modular residues are all distinct:
\[
s \cdot j_1 \bmod n, \quad s \cdot j_2 \bmod n, \quad \ldots, \quad s \cdot j_d \bmod n.
\]
Proof. By Theorem 3.2.1, $\mathcal{I}$ is an interpolating system for $B_{\mathcal{J}}$ if and only if $\det(E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}}) \neq 0$. We now build $E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}}$. Recall what $\mathcal{F}^*$ looks like:
\[
\mathcal{F}^* = \begin{bmatrix}
1 & 1 & 1 & 1 & \cdots & 1 \\
1 & \omega & \omega^2 & \omega^3 & \cdots & \omega^{N-1} \\
1 & \omega^2 & \omega^4 & \omega^6 & \cdots & \omega^{2(N-1)} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{N-1} & \omega^{2(N-1)} & \omega^{3(N-1)} & \cdots & \omega^{(N-1)(N-1)}
\end{bmatrix}.
\]
By Theorem 3.2.3, we may assume without loss of generality that the first entry of $\mathcal{I}$ is $0$, so that $\mathcal{I} = \{0, s, 2s, 3s, \ldots, (d-1)s\}$. Then $E_{\mathcal{I}}^T \mathcal{F}^*$ has the form
\[
E_{\mathcal{I}}^T \mathcal{F}^* = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega^{s} & \omega^{2s} & \cdots & \omega^{(n-1)s} \\
1 & \omega^{2s} & \omega^{4s} & \cdots & \omega^{2(n-1)s} \\
1 & \omega^{3s} & \omega^{6s} & \cdots & \omega^{3(n-1)s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{(d-1)s} & \omega^{2(d-1)s} & \cdots & \omega^{(d-1)(n-1)s}
\end{bmatrix}.
\]
Now multiplying on the right by $E_{\mathcal{J}}$, where $\mathcal{J} = \{j_1, j_2, j_3, \ldots, j_d\}$,
\[
E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}} = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
\omega^{j_1 s} & \omega^{j_2 s} & \omega^{j_3 s} & \cdots & \omega^{j_d s} \\
\omega^{2 j_1 s} & \omega^{2 j_2 s} & \omega^{2 j_3 s} & \cdots & \omega^{2 j_d s} \\
\omega^{3 j_1 s} & \omega^{3 j_2 s} & \omega^{3 j_3 s} & \cdots & \omega^{3 j_d s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\omega^{(d-1) j_1 s} & \omega^{(d-1) j_2 s} & \omega^{(d-1) j_3 s} & \cdots & \omega^{(d-1) j_d s}
\end{bmatrix}.
\]
This is a Vandermonde matrix, so its determinant is
\[
\det(E_{\mathcal{I}}^T \mathcal{F}^* E_{\mathcal{J}}) = \prod_{1 \le u < v \le d} \left(\omega^{j_v s} - \omega^{j_u s}\right),
\]