Characterization of Discrete Linear Shift-Invariant Systems


2017 25th European Signal Processing Conference (EUSIPCO)

Michael Clausen and Frank Kurth
Fraunhofer FKIE, Communication Systems, 53343 Wachtberg, Germany
Email: [email protected], [email protected]

Abstract—Linear time-invariant (LTI) systems are of fundamental importance in classical digital signal processing. LTI systems are linear operators commuting with the time-shift operator. For N-periodic discrete time series the time-shift operator is a circulant N × N permutation matrix. Sandryhaila and Moura developed a linear discrete signal processing framework and corresponding tools for datasets arising from social, biological, and physical networks. In their framework, the circulant permutation matrix is replaced by a network-specific matrix A ∈ C^{N×N}, called a shift matrix, and the linear shift-invariant (LSI) systems are all N × N matrices H over C commuting with the shift matrix: HA = AH. Sandryhaila and Moura described all those H for the non-degenerate case, in which all eigenspaces of A are one-dimensional. Then the authors reduced the degenerate case to the non-degenerate one. As we show in this paper, this reduction does, however, not generally hold, leaving open one gap in the proposed argument. In this paper we are able to close this gap and propose a complete characterization of all (i.e., degenerate and non-degenerate) LSI systems. Finally, we describe the corresponding spectral decompositions.

I. INTRODUCTION

Linear time-invariant (LTI) systems are linear operators commuting with the time-shift operator. Such systems are of fundamental importance in classical digital signal processing. For N-periodic discrete time series the time-shift operator is the circulant permutation matrix corresponding to the cyclic permutation (0, 1, ..., N − 1). In [1], Sandryhaila and Moura developed a linear discrete signal processing framework and corresponding tools for datasets arising from social, biological, and physical networks. In their framework, the circulant permutation matrix is replaced by a network-specific matrix A ∈ C^{N×N}, the shift matrix, and the linear shift-invariant (LSI) systems are all matrices in the centralizer of A:

    C(A) := {H ∈ C^{N×N} | HA = AH}.

This centralizer has an additional algebraic structure, briefly reviewed in Subsection III-A: C(A) is a subalgebra of C^{N×N}. Moreover, C(A) contains the algebra C[A] of all polynomial expressions in A. Thus C[A] ⊆ C(A) ⊆ C^{N×N}.

In [1], the authors provide an explicit description of the centralizer of A in the non-degenerate case, in which the eigenspace of every eigenvalue of A is one-dimensional. More precisely, they showed that in this case the centralizer of A coincides with the algebra of all polynomial expressions in A, i.e., C(A) = C[A]. Note that in this case the centralizer is a commutative algebra. If however A is the N × N unit matrix, then C(A) = C^{N×N}, which is non-commutative for N ≥ 2. Thus centralizers can be non-commutative. In fact, we prove in Section IV that the centralizer of A is commutative if and only if all eigenspaces of A are one-dimensional. Thus if we are in the degenerate case, then not all LSI systems w.r.t. A are of the form h(A), for a polynomial h. Hence the argument in [1] (p. 1647, following Theorem 2), reducing graph filters for the degenerate case to filters in the non-degenerate case, does not hold.

In this paper, we give a complete description of LSI systems by describing the centralizer of arbitrary complex-valued N × N matrices A. We start in Section II with two examples illustrating the degenerate case. To keep this paper to some extent self-contained, we introduce in Section III the notation required in the following and discuss basic tools: associative algebras and their morphisms, minimal and characteristic polynomials, upper triangular Toeplitz matrices, Jordan canonical form, and Hermite interpolation. After compiling these tools, we present in Section IV the characterization of centralizers. Finally, in Section V we describe the spectral decompositions of the signal space C^N w.r.t. C[A] and C(A).

II. ILLUSTRATING THE DEGENERATE CASE

There are a number of network-specific matrices A with repeated eigenvalues. Here are two examples.

Example 1. Figure 1 shows the Petersen graph.

[Fig. 1. Petersen graph]

The characteristic polynomial p_A(X) := det(XI − A) of its adjacency matrix A equals (X − 1)^5 · (X + 2)^4 · (X − 3). Thus λ1 = 1 is a 5-fold eigenvalue, whereas λ2 = −2 is a 4-fold eigenvalue of A. A is symmetric, hence diagonalizable. Thus the dimension of C[A] equals the number of distinct eigenvalues of A, which is 3, whereas the centralizer C(A) is isomorphic to the non-commutative algebra C^{5×5} ⊕ C^{4×4} ⊕ C^{1×1}, which is of dimension 42. ♦

ISBN 978-0-9928626-7-1 © EURASIP 2017, p. 366

Example 2. As a second example we consider the adjacency matrix A = (a_{i,j}) of a clique with N nodes. Here, a_{i,i} = 0 and a_{i,j} = 1, for all i and all j ≠ i. Figure 2 illustrates the case N = 6.

[Fig. 2. Clique for N = 6]

Thus A is a circulant matrix. (Recall that an N × N matrix of the form (a_{(j−i) mod N})_{i,j} is called circulant.) It is well known that the N × N DFT matrix F = (ω^{pq})_{0≤p,q<N}, ω := exp(2πi/N), performs a simultaneous diagonalization of all circulant N × N matrices A. More precisely, if a denotes the leftmost column of A, then

    F^{−1} A F = diag(F a) =: ∆.

In our case, a = (0, 1, ..., 1)^⊤ ∈ C^N. A straightforward calculation shows that F a = (N − 1, −1, ..., −1)^⊤. In particular, N − 1 is a simple eigenvalue, whereas −1 is an (N − 1)-fold eigenvalue of A. Let us compare C[∆] ≃ C[A] and C(∆) ≃ C(A) for ∆ := diag(N − 1, −1, ..., −1). If h ∈ C[X], then h(∆) = diag(h(N − 1), h(−1), ..., h(−1)). This shows that C[∆], and hence C[A], is 2-dimensional, in contrast to the centralizer C(∆), which is equal to the space C^{1×1} ⊕ C^{(N−1)×(N−1)}. Thus the centralizer C(A) is of dimension 1 + (N − 1)^2. ♦

Both examples show that in the degenerate case the centralizer of A can be much larger than the space of all polynomial expressions in A.

III. PRELIMINARIES

A. Review of Associative Algebras

An associative C-algebra A is a vector space over C with an associative multiplication and a unit element 1_A. Moreover, addition, scalar multiplication, and multiplication must be compatible. The space

    C[X] := { h_0 + h_1 X + ... + h_n X^n | n ≥ 0; h_0, ..., h_n ∈ C }

of all univariate polynomials or the space C^{N×N} of all N × N matrices are examples of C-algebras. A subalgebra B of A is a linear subspace of A closed under multiplication and containing 1_A. For example, the space of all upper triangular N × N matrices is a subalgebra of C^{N×N}. An algebra morphism T : A → B is a C-linear map that is also multiplicative and maps 1_A to 1_B.

B. Upper Triangular Toeplitz Matrices

Recall that a matrix T = (t_{i,j}) ∈ C^{m×n} is a Toeplitz matrix if each descending diagonal from left to right is constant, i.e., t_{i,j} = t_{i+1,j+1} whenever both sides are defined. Thus every m × n Toeplitz matrix is completely specified by the entries in its leftmost column and its topmost row. We are mainly interested in upper triangular Toeplitz matrices. Such matrices are specified by their topmost row. For a = (a_0, ..., a_{d−1}) ∈ C^d define T_d(a) = T_d(a_0, ..., a_{d−1}) = (t_{i,j}) ∈ C^{d×d} by t_{i,j} := 0 if i > j, and t_{i,j} := a_{j−i} if i ≤ j. Thus

             | a_0  a_1  a_2  ...  a_{d−2}  a_{d−1} |
             |      a_0  a_1  a_2   ...     a_{d−2} |
    T_d(a) = |           ...  ...   ...      ...    |
             |                a_0   a_1      a_2    |
             |                      a_0      a_1    |
             |                               a_0    |

(with zeros below the main diagonal). T_d denotes the set of all d × d upper triangular Toeplitz matrices.

Lemma 3. T_d is a commutative C-algebra. More precisely, if e_j denotes the jth unit vector in C^d, then the matrices T_d(e_0), ..., T_d(e_{d−1}) are a basis of T_d and, for 0 ≤ i, j < d,

    T_d(e_i) · T_d(e_j) = T_d(e_{i+j}) if i + j < d, and 0 otherwise.

In general, if a, b, c ∈ C^d and T_d(a) · T_d(b) = T_d(c), then c_k = Σ_{j=0}^{k} a_j b_{k−j}, for k ∈ [0, d − 1].

Proof. Follows by a straightforward computation.

Thus the multiplication of upper triangular Toeplitz matrices in T_d boils down to truncated polynomial convolution. Via FFT, this multiplication can be performed with O(d log d) arithmetic operations. Below, we need the following generalization of T_d.

Definition 4. For positive integers d, e let T_{d,e} denote the set of all d × e matrices T = (t_{i,j})_{0≤i<d, 0≤j<e} such that t_{i,j} = t_{i+1,j+1} whenever both sides are defined. Moreover, t_{i,j} is zero if j − i < max(0, e − d). Note that T_{d,d} = T_d.

Example 5. Let a_0, ..., a_4 ∈ C. Then typical elements in T_{3,5} resp. T_{5,3} look as follows:

    | 0  0  a_2  a_3  a_4 |         | a_0  a_1  a_2 |
    | 0  0  0    a_2  a_3 |  resp.  | 0    a_0  a_1 |
    | 0  0  0    0    a_2 |         | 0    0    a_0 |
                                    | 0    0    0   |
                                    | 0    0    0   |
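The dimension claims of the two examples and the truncated-convolution rule of Lemma 3 can be checked numerically. The following sketch (not part of the paper; it assumes NumPy, and the helper names `centralizer_dim`, `poly_algebra_dim`, and `T` are ours) computes dim C(A) as the kernel dimension of the linear map H ↦ HA − AH, using vec(HA) = (A^⊤ ⊗ I) vec(H) and vec(AH) = (I ⊗ A) vec(H):

```python
import numpy as np

def centralizer_dim(A):
    """dim C(A): HA = AH  <=>  (A^T kron I - I kron A) vec(H) = 0."""
    N = A.shape[0]
    M = np.kron(A.T, np.eye(N)) - np.kron(np.eye(N), A)
    return N * N - np.linalg.matrix_rank(M)

def poly_algebra_dim(A):
    """For a symmetric (hence diagonalizable) A, dim C[A] equals the
    number of distinct eigenvalues of A."""
    return len(np.unique(np.round(np.linalg.eigvalsh(A), 8)))

# Example 2: clique on N = 6 nodes (all ones off the diagonal)
N = 6
A_clique = np.ones((N, N)) - np.eye(N)
assert poly_algebra_dim(A_clique) == 2            # eigenvalues N-1 and -1
assert centralizer_dim(A_clique) == 1 + (N - 1)**2  # = 26

# Example 1: Petersen graph (outer 5-cycle 0..4, spokes, inner pentagram 5..9)
P = np.zeros((10, 10))
for i in range(5):
    for u, v in [(i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)]:
        P[u, v] = P[v, u] = 1
assert poly_algebra_dim(P) == 3                   # eigenvalues 3, 1, -2
assert centralizer_dim(P) == 5**2 + 4**2 + 1**2   # = 42

# Lemma 3: T_d(a) * T_d(b) = T_d(c) with c the truncated convolution of a, b
def T(a):
    """Upper triangular Toeplitz matrix with topmost row a."""
    d = len(a)
    return sum(a[k] * np.eye(d, k=k) for k in range(d))

a, b = np.array([1.0, 2, 3, 4]), np.array([5.0, 6, 7, 8])
c = np.convolve(a, b)[: len(a)]                   # c_k = sum_j a_j b_{k-j}
assert np.allclose(T(a) @ T(b), T(c))
```

The centralizer dimensions match the direct-sum decompositions C^{5×5} ⊕ C^{4×4} ⊕ C^{1×1} and C^{1×1} ⊕ C^{(N−1)×(N−1)} given in the examples.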