On the Realizability of the Critical Points of a Realizable List


Sarah L. Hoover (Hamilton College, 198 College Hill Rd., Clinton, NY 13323, USA), Daniel A. McCormick (Florida Institute of Technology, 150 W. University Blvd., Melbourne, FL 32901, USA), Pietro Paparella (corresponding author) and Amber R. Thrall (Division of Engineering and Mathematics, University of Washington Bothell, 18115 Campus Way NE, Bothell, WA 98011, USA)

Email addresses: [email protected] (Sarah L. Hoover), [email protected] (Daniel A. McCormick), [email protected] (Pietro Paparella), [email protected] (Amber R. Thrall). URLs: http://faculty.washington.edu/pietrop/ (Pietro Paparella), http://amber.thrall.me/ (Amber R. Thrall)

This work was supported by NSF Award DMS-1460699. Preprint submitted to Linear Algebra and its Applications, September 18, 2018. arXiv:1712.05454v1 [math.SP] 14 Dec 2017.

Abstract

The nonnegative inverse eigenvalue problem (NIEP) is to characterize the spectra of entrywise nonnegative matrices. A finite multiset of complex numbers is called realizable if it is the spectrum of an entrywise nonnegative matrix. Monov conjectured that the kth moments of the list of critical points of a realizable list are nonnegative. Johnson further conjectured that the list of critical points must be realizable. In this work, Johnson's conjecture, and consequently Monov's conjecture, is established for a variety of important cases, including Ciarlet spectra, Suleĭmanova spectra, spectra realizable via companion matrices, and spectra realizable via similarity by a complex Hadamard matrix. Additionally, we prove a result on differentiators and trace vectors, and use it to provide an alternate proof of a result due to Malamud and a generalization of a result due to Kushel and Tyaglov on circulant matrices. Implications for further research are discussed.

Keywords: Nonnegative inverse eigenvalue problem, Suleĭmanova spectrum, differentiator, trace vector, complex Hadamard matrix
2010 MSC: 15A29, 15A18, 15B48, 30C15

1. Introduction

The longstanding nonnegative inverse eigenvalue problem (NIEP) is to characterize the spectra of entrywise nonnegative matrices. More specifically, given a finite multiset (herein list) Λ = {λ_1, ..., λ_n} of complex numbers, the NIEP asks for necessary and sufficient conditions such that Λ is the spectrum of an n-by-n entrywise nonnegative matrix. If Λ is the spectrum of a nonnegative matrix A, then Λ is called realizable and A is called a realizing matrix (for Λ). It is well known that if Λ = {λ_1, ..., λ_n} is realizable, then Λ must satisfy the following conditions:

\[ \overline{\Lambda} := \{\overline{\lambda}_1, \dots, \overline{\lambda}_n\} = \Lambda; \qquad (1) \]
\[ \rho(\Lambda) := \max_{1 \le i \le n} \{|\lambda_i|\} \in \Lambda; \qquad (2) \]
\[ s_k(\Lambda) := \sum_{i=1}^{n} \lambda_i^k \ge 0, \quad \forall k \in \mathbb{N}; \text{ and} \qquad (3) \]
\[ s_k^m(\Lambda) \le n^{m-1} s_{km}(\Lambda), \quad \forall k, m \in \mathbb{N}. \qquad (4) \]

It is worth noting that these conditions are not independent: Loewy and London [24] showed that the moment condition (3) implies the self-conjugacy condition (1); Friedland [12, Theorem 1] showed that the eventual nonnegativity of the moments implies the spectral radius condition (2); and if the trace is nonnegative, i.e., if s_1(Λ) ≥ 0, then the J-LL condition (4) (established independently by Johnson [15] and by Loewy and London [24]) implies the moment condition, since

\[ s_k(\Lambda) \ge \frac{1}{n^{k-1}} s_1^k(\Lambda) \ge 0. \]

Thus, the J-LL condition and the nonnegativity of the trace imply the self-conjugacy, spectral radius, and moment conditions.
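Conditions (1)–(4) are straightforward to test numerically for a candidate list. The sketch below is our own illustration, not part of the paper: the function name, the truncation parameters k_max and m_max, and the tolerance are all assumptions, and only a finite range of k and m can be checked.

```python
# Sketch (ours, not from the paper): finite numerical check of the
# necessary conditions (1)-(4) for a candidate list, using NumPy.
import numpy as np

def necessary_conditions(lam, k_max=6, m_max=4, tol=1e-9):
    """Check self-conjugacy, spectral radius, moment, and J-LL conditions."""
    lam = np.asarray(lam, dtype=complex)
    n = lam.size

    def s(k):                                   # k-th moment s_k(Lambda)
        return np.sum(lam ** k)

    # (1) self-conjugacy: the list equals its conjugate as a multiset
    self_conjugate = np.allclose(np.sort_complex(lam),
                                 np.sort_complex(np.conj(lam)), atol=tol)
    # (2) the spectral radius max |lambda_i| is itself an element of the list
    rho = np.max(np.abs(lam))
    rho_in_list = bool(np.any(np.abs(lam - rho) < tol))
    # (3) nonnegative (real) moments, checked for k = 1, ..., k_max
    moments_ok = all(abs(s(k).imag) < tol and s(k).real >= -tol
                     for k in range(1, k_max + 1))
    # (4) J-LL inequality s_k^m <= n^(m-1) * s_{km}, checked on a finite grid
    jll_ok = all(s(k).real ** m <= n ** (m - 1) * s(k * m).real + tol
                 for k in range(1, k_max + 1) for m in range(1, m_max + 1))
    return self_conjugate, rho_in_list, moments_ok, jll_ok

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # a nonnegative matrix
print(necessary_conditions(np.linalg.eigvals(A)))         # spectrum {2, -1, -1}
```

The spectrum {2, −1, −1} of the nonnegative matrix above passes all four checks, as any realizable list must; of course, passing this finite test does not certify realizability, since the conditions are necessary but not sufficient.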
There is another necessary condition due to Holtz [13], which was shown to be independent of J-LL, but for brevity it will not be considered here (for more information regarding the NIEP, there are several articles [2, 6, 11, 17, 26, 32] and monographs [26, 1] that discuss the problem).

Let Λ = {λ_1, ..., λ_n} ⊂ C (n ≥ 2) be a list, let p(z) = ∏_{i=1}^{n} (z − λ_i), and let Λ′ = {μ_1, ..., μ_{n−1}} denote the zeros of p′. Monov [27] posed the following conjecture.

Conjecture 1.1 (Monov). If Λ is realizable, then s_k(Λ′) ≥ 0 for all k ∈ N.

The following is due to Johnson [16].

Conjecture 1.2 (Johnson). If Λ is realizable, then Λ′ is realizable.

It is clear that Johnson's conjecture implies Monov's conjecture. Cronin and Laffey [8] established Johnson's conjecture in the cases when n ≤ 4, or when n ≤ 6 and s_1(Λ) = 0.

Note that the converse of Conjecture 1.2 is not true; consider the list Λ = {1, 1, −2/3, −2/3, −2/3} and the polynomial p(z) = (z − 1)^2 (z + 2/3)^3. Then p′(z) = 5z^4 − 5z^2 − (20/27)z + 20/27, Λ′ = {1, 1/3, −2/3, −2/3}, and Λ′ is realizable [24]. However, it is well known that Λ is not realizable (see, e.g., Friedland [12] and Paparella and Taylor [28] for a more general result). Note that there are instances when the constant of integration can be selected so that the zeros of an antiderivative form a realizable set (see Section 6).

In this work, we investigate and constructively prove Conjecture 1.2 for a variety of cases. First, we examine the kth moments of the critical points of a realizable list and establish a sufficient condition for realizability via companion matrices. Next, we explore the d-companion matrix as a construction for a realizing matrix of the critical points and establish a sufficient condition that includes Ciarlet spectra [7]. We then study differentiators following the work of Pereira [29] and show that Johnson's conjecture holds for spectra realizable via complex Hadamard similarities. Along the way, we provide an alternate proof of a result of Malamud [25] and extend the results on circulant matrices by Kushel and Tyaglov [20]. Additionally, we use these results to provide a new proof of a classical theorem on the interlacing of the roots and critical points of a polynomial. Finally, implications for further research are discussed.

2. Notation

Unless otherwise stated, Λ := {λ_1, ..., λ_n} ⊂ C (n ≥ 2), p(t) := ∏_{i=1}^{n} (t − λ_i) ∈ C[t], and Λ′ := {μ ∈ C | p′(μ) = 0} = {μ_1, ..., μ_{n−1}}. Furthermore, we refer to the critical points of p, i.e., the zeros of p′, as the critical points of Λ. For n ∈ N, we let ⟨n⟩ denote the set {1, ..., n}.

We let M_{m×n}(F) denote the set of m-by-n matrices with entries from a field F. In the case when m = n, M_{m×n}(F) is abbreviated to M_n(F).

For A ∈ M_n(F), we let σ(A) denote the spectrum (i.e., the list of eigenvalues) of A; we let A_(i) denote the ith principal submatrix of A, i.e., A_(i) is the (n−1)-by-(n−1) matrix obtained by deleting the ith row and ith column of A; and we let τ(A) denote the normalized trace of A, i.e., τ(A) := (1/n) tr A.

We let diag(λ_1, ..., λ_n) denote the diagonal matrix whose ith diagonal entry is λ_i. For A ∈ M_m(F) and B ∈ M_n(F), the direct sum of A and B, denoted A ⊕ B, is the matrix

\[ \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} \in M_{m+n}(\mathbb{F}). \]

The Kronecker product of A ∈ M_{m×n}(F) and B ∈ M_{p×q}(F), denoted A ⊗ B, is the (mp)-by-(nq) block matrix

\[ \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix}. \]

3. Preliminary Results

Proposition 3.1. If Λ is realizable, then $\overline{\Lambda'} = \Lambda'$.

Proof. This follows from the fact that the derivative of a real polynomial is real.
Proposition 3.2. If Λ is realizable, then s_1(Λ′) ≥ 0.

Proof. Since the centroid of the zeros of p′ coincides with the centroid of the zeros of p, i.e.,

\[ \frac{1}{n} s_1(\Lambda) = \frac{1}{n-1} s_1(\Lambda'), \]

it follows that

\[ s_1(\Lambda') = \frac{n-1}{n} s_1(\Lambda) \ge 0. \]

Monov [27, Proposition 2.3] provides a formula that relates the moments of a list with the moments of its critical points.

Theorem 3.3 (Monov's moment formula). If

\[
d_k(\Lambda) :=
\det
\begin{bmatrix}
s_1(\Lambda) & n & 0 & \cdots & 0 \\
2 s_2(\Lambda) & s_1(\Lambda) & n & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
(k-1) s_{k-1}(\Lambda) & s_{k-2}(\Lambda) & s_{k-3}(\Lambda) & \cdots & n \\
k s_k(\Lambda) & s_{k-1}(\Lambda) & s_{k-2}(\Lambda) & \cdots & s_1(\Lambda)
\end{bmatrix},
\]

then

\[ s_k(\Lambda') = s_k(\Lambda) + \frac{1}{(-n)^k} d_k(\Lambda). \qquad (5) \]

Corollary 3.4. If Λ is realizable, then s_2(Λ′) ≥ 0.

Proof. A straightforward application of Theorem 3.3 yields

\[ s_2(\Lambda') = \frac{n-2}{n} s_2(\Lambda) + \frac{1}{n^2} s_1^2(\Lambda) \ge 0. \]

With this we see that a particular case of the J-LL condition is preserved.

Corollary 3.5. If Λ is a list, then

\[ s_1^2(\Lambda) \le n s_2(\Lambda) \qquad (6) \]

if and only if

\[ s_1^2(\Lambda') \le (n-1) s_2(\Lambda'). \]

Proof. Following (5),

\begin{align*}
s_1^2(\Lambda) \le n s_2(\Lambda)
&\iff 0 \le \frac{n-2}{n} s_2(\Lambda) - \frac{n-2}{n^2} s_1^2(\Lambda) \\
&\iff 0 \le \frac{n-2}{n} s_2(\Lambda) + \frac{1}{n^2} s_1^2(\Lambda) - \frac{1}{n^2} s_1^2(\Lambda) - \frac{n-2}{n^2} s_1^2(\Lambda) \\
&\iff 0 \le \frac{n-2}{n} s_2(\Lambda) + \frac{1}{n^2} s_1^2(\Lambda) - \frac{n-1}{n^2} s_1^2(\Lambda) \\
&\iff 0 \le (n-1)\left[ \frac{n-2}{n} s_2(\Lambda) + \frac{1}{n^2} s_1^2(\Lambda) \right] - \frac{(n-1)^2}{n^2} s_1^2(\Lambda) \\
&\iff 0 \le (n-1) s_2(\Lambda') - s_1^2(\Lambda') \\
&\iff s_1^2(\Lambda') \le (n-1) s_2(\Lambda').
\end{align*}

Inequality (6) is important for the realizability of lists with three elements [24, Theorem 2], as well as for generalized Suleĭmanova spectra [21], which we define in the sequel. A real list Λ is called a Suleĭmanova spectrum if s_1(Λ) ≥ 0 and Λ contains exactly one nonnegative entry.
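Formula (5) lends itself to a direct numerical check. The sketch below is our own illustration, not part of the paper; the helper names `s` and `d` and the random test list are assumptions. It builds the determinant d_k(Λ) of Theorem 3.3 (with the convention s_0(Λ) := n), computes the critical points with NumPy, and confirms (5) for small k.

```python
# Sketch: numerical check of Monov's moment formula (5) with NumPy.
import numpy as np

def s(lam, k):
    """k-th moment s_k(Lambda), the sum of k-th powers of the list."""
    return np.sum(np.asarray(lam, dtype=complex) ** k)

def d(lam, k):
    """The k-by-k determinant d_k(Lambda) of Theorem 3.3 (with s_0 := n)."""
    n = len(lam)
    M = np.zeros((k, k), dtype=complex)
    for i in range(1, k + 1):                # row i
        M[i - 1, 0] = i * s(lam, i)          # first column: i * s_i(Lambda)
        for j in range(2, min(i + 1, k) + 1):
            M[i - 1, j - 1] = n if i - j + 1 == 0 else s(lam, i - j + 1)
    return np.linalg.det(M)

rng = np.random.default_rng(0)
lam = rng.random(6)                          # a random real list with n = 6
mu = np.roots(np.polyder(np.poly(lam)))      # its critical points
n = len(lam)
for k in range(1, 5):
    lhs = s(mu, k)
    rhs = s(lam, k) + d(lam, k) / (-n) ** k
    print(k, np.isclose(lhs, rhs))
```

For k = 1 and k = 2 the same computation reduces to Proposition 3.2 and Corollary 3.4, since d_1(Λ) = s_1(Λ) and d_2(Λ) = s_1^2(Λ) − 2n s_2(Λ).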