Appendix A: Matrix-Vector Representation for Signal Transformation


[M.A. Abidi et al., Optimization Techniques in Computer Vision, Advances in Computer Vision and Pattern Recognition. © Springer International Publishing Switzerland 2016. DOI 10.1007/978-3-319-46364-3]

A set of numbers can be used to represent discrete signals. These numbers carry a certain amount of information and are subject to change by various kinds of transformations, called systems. For example, a one-dimensional linear time-invariant system can be expressed by its impulse response; the output of the system is then the convolution of the impulse response with the input signal. Convolution equations, in general, are too cumbersome to express related theories and algorithms efficiently.

Analysis and representation of signal transformations can be substantially simplified by using matrix-vector representation, where a vector represents the signal and a matrix represents the transformation.

A.1 One-Dimensional Signals and Systems

Suppose a one-dimensional system has input signal $x(n)$, $n = 0, 1, \ldots, N-1$, and impulse response $h(n)$. The output of the system can be expressed as the one-dimensional convolution

$$y(n) = \sum_{q=0}^{N-1} h(n-q)\,x(q), \quad \text{for } n = 0, 1, \ldots, N-1. \tag{A.1}$$

By simply rewriting Eq. (A.1), we have

$$\begin{aligned}
y(0) &= h(0)x(0) + h(-1)x(1) + h(-2)x(2) + \cdots \\
y(1) &= h(1)x(0) + h(0)x(1) + h(-1)x(2) + \cdots \\
&\;\;\vdots \\
y(N-1) &= h(N-1)x(0) + h(N-2)x(1) + h(N-3)x(2) + \cdots
\end{aligned} \tag{A.2}$$

If we express both input and output signals as $N \times 1$ vectors, such as

$$\mathbf{x} = [x(0)\; x(1)\; \cdots\; x(N-1)]^T \quad \text{and} \quad \mathbf{y} = [y(0)\; y(1)\; \cdots\; y(N-1)]^T, \tag{A.3}$$

then the output vector is obtained by the matrix-vector multiplication

$$\mathbf{y} = H\mathbf{x}, \tag{A.4}$$

where

$$H = \begin{bmatrix}
h(0) & h(-1) & h(-2) & \cdots & h(-N+1) \\
h(1) & h(0) & h(-1) & \cdots & h(-N+2) \\
h(2) & h(1) & h(0) & \cdots & h(-N+3) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h(N-1) & h(N-2) & h(N-3) & \cdots & h(0)
\end{bmatrix}. \tag{A.5}$$

We note that $H$ is a Toeplitz matrix, having constant elements along the main diagonal and each sub-diagonal.

If two convolving sequences are periodic with period $N$, their circular convolution is also periodic. In this case $h(-n) = h(N-n)$, which results in the circulant matrix

$$H = \begin{bmatrix}
h(0) & h(N-1) & h(N-2) & \cdots & h(1) \\
h(1) & h(0) & h(N-1) & \cdots & h(2) \\
h(2) & h(1) & h(0) & \cdots & h(3) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h(N-1) & h(N-2) & h(N-3) & \cdots & h(0)
\end{bmatrix}. \tag{A.6}$$

The first column of $H$ equals the vector $\mathbf{h} = [h(0)\; h(1)\; \cdots\; h(N-1)]^T$, and the second column is the version of $\mathbf{h}$ circularly rotated by one element, $[h(N-1)\; h(0)\; \cdots\; h(N-2)]^T$. The remaining columns are determined in the same manner.

Example A.1: One-Dimensional Shift-Invariant Filtering and the Circulant Matrix

Consider the discrete sequence $\{1\; 2\; 3\; 4\; 5\; 4\; 3\; 2\; 1\}$, and suppose that the corresponding noisy observation is given as $\mathbf{x} = [1.10\; 1.80\; 3.10\; 4.20\; 5.10\; 3.70\; 3.20\; 2.10\; 0.70]^T$. One simple way to remove the noise is to replace each observed sample by the average of the neighboring samples.
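With a periodic (circular) boundary, such a neighborhood average is exactly a multiplication by a circulant matrix of the form in Eq. (A.6). A minimal numerical sketch of one such averaging filter, the three-tap circular mean, using NumPy (an illustrative choice; the book prescribes no implementation):

```python
import numpy as np

N = 9
x = np.array([1.10, 1.80, 3.10, 4.20, 5.10, 3.70, 3.20, 2.10, 0.70])

# Circulant averaging matrix for h(n) = (1/3){delta(n+1) + delta(n) + delta(n-1)}:
# row i holds 1/3 at columns (i-1) mod N, i, and (i+1) mod N.
H = np.zeros((N, N))
for i in range(N):
    for k in (-1, 0, 1):
        H[i, (i + k) % N] = 1.0 / 3.0

y = H @ x

# Same result as averaging each sample with its circular neighbours directly.
y_roll = (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0
print(np.allclose(y, y_roll))            # True
print(round(y[0], 2), round(y[-1], 2))   # 1.2 1.3
```

The periodicity assumption is what lets the first and last samples be averaged with wrapped-around neighbors rather than left undefined at the boundary.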
If we use an averaging filter that replaces each sample by the average of its two neighboring samples plus the sample itself, we have the output $\mathbf{y} = [1.20\; 2.00\; 3.03\; 4.03\; 5.00\; 4.00\; 3.00\; 2.00\; 1.30]^T$, where the first and last samples have been computed under the assumption that the input sequence is periodic with period 9, because they lie at the boundary and do not have enough neighboring samples for convolution with the impulse response.

The averaging process can be expressed as a one-dimensional time-invariant system whose impulse response is

$$h(n) = \frac{1}{3}\{\delta(n+1) + \delta(n) + \delta(n-1)\}. \tag{A.7}$$

We can form the corresponding circulant matrix from this impulse response as

$$H = \frac{1}{3}\begin{bmatrix}
1 & 1 & 0 & \cdots & 1 \\
1 & 1 & 1 & \cdots & 0 \\
0 & 1 & 1 & \ddots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
1 & 0 & 0 & \cdots & 1
\end{bmatrix}. \tag{A.8}$$

It is straightforward to verify that $\mathbf{y} = H\mathbf{x}$.

A.2 Two-Dimensional Signals and Systems

In the previous section we obtained the matrix-vector expression of one-dimensional convolution by mapping the input signal to a vector and the impulse response to a Toeplitz or circulant matrix. In a similar manner, we can represent two-dimensional convolution as a matrix-vector expression by mapping the input two-dimensional array into a row-ordered vector and the two-dimensional impulse response into a doubly block circulant matrix.

A.2.1 Row-Ordered Vector

Two-dimensional rectangular arrays, or matrices, usually represent image data. Representing two-dimensional image processing systems, however, becomes too complicated to analyze if we use two-dimensional matrices for the input and output signals. Based on the idea that both vectors and matrices can represent the same data, only in different formats, we can represent two-dimensional image data by a row-ordered vector.
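Row ordering corresponds to row-major ("C order") flattening, which is NumPy's default; a small sketch (NumPy assumed, with a hypothetical 2 × 3 image):

```python
import numpy as np

# Row-ordering: stack the rows of an M x N image into an MN x 1 vector.
X = np.array([[1, 2, 3],
              [4, 5, 6]])        # M = 2, N = 3
x = X.reshape(-1)                 # row-major flattening
print(x)  # [1 2 3 4 5 6]

# The inverse mapping recovers the image from the vector.
print(np.array_equal(x.reshape(2, 3), X))  # True
```

Element $x(m,n)$ of the image lands at position $mN + n$ of the vector, which is the indexing convention used implicitly by the block matrices below.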
Let the following two-dimensional $M \times N$ array represent an image:

$$X = \begin{bmatrix}
x(0,0) & x(0,1) & \cdots & x(0,N-1) \\
x(1,0) & x(1,1) & \cdots & x(1,N-1) \\
\vdots & \vdots & \ddots & \vdots \\
x(M-1,0) & x(M-1,1) & \cdots & x(M-1,N-1)
\end{bmatrix}, \tag{A.9}$$

which can also be represented by the row-ordered $MN \times 1$ vector

$$\mathbf{x} = [x(0,0)\; x(0,1)\; \cdots\; x(0,N-1)\; x(1,0)\; \cdots\; x(1,N-1)\; \cdots\; x(M-1,0)\; \cdots\; x(M-1,N-1)]^T. \tag{A.10}$$

A.2.2 Block Matrices

A space-invariant two-dimensional system is characterized by a two-dimensional impulse response. The output of the system is determined by the two-dimensional convolution

$$y(m,n) = \sum_{p=0}^{M-1}\sum_{q=0}^{N-1} h(m-p,\, n-q)\, x(p,q), \tag{A.11}$$

where $y(m,n)$, $h(m,n)$, and $x(m,n)$, respectively, represent the two-dimensional output, the impulse response, and the input signal. As in the one-dimensional case, two-dimensional convolution can also be expressed as a matrix-vector multiplication.

Example A.2: Two-Dimensional Space-Invariant Filtering and the Block Circulant Matrix

Suppose that an $N \times N$ image $x(m,n)$ is filtered by the two-dimensional low-pass filter with impulse response

$$h(m,n) = \frac{1}{16}\left\{\begin{aligned}
&\delta(m+1,n+1) + 2\delta(m+1,n) + \delta(m+1,n-1) \\
&+ 2\delta(m,n+1) + 4\delta(m,n) + 2\delta(m,n-1) \\
&+ \delta(m-1,n+1) + 2\delta(m-1,n) + \delta(m-1,n-1)
\end{aligned}\right\}. \tag{A.12}$$

The output is obtained by the two-dimensional convolution given in Eq. (A.11). We can also express this convolution by multiplying a block matrix with the row-ordered vector. If we assume that both the impulse response and the input signal are periodic with period $N \times N$, it is straightforward to show that the matrix-vector multiplication

$$\mathbf{y} = H\mathbf{x} \tag{A.13}$$

is equivalent to the two-dimensional convolution, where the row-ordered vector $\mathbf{x}$ is obtained as in Eq. (A.10), and the block matrix is

$$H = \frac{1}{16}\begin{bmatrix}
H_0 & H_{-1} & 0 & \cdots & H_1 \\
H_1 & H_0 & H_{-1} & \cdots & 0 \\
0 & H_1 & H_0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
H_{-1} & 0 & 0 & \cdots & H_0
\end{bmatrix}. \tag{A.14}$$

Each element of $H$ is itself a matrix, defined as

$$H_0 = \begin{bmatrix}
4 & 2 & 0 & \cdots & 2 \\
2 & 4 & 2 & \cdots & 0 \\
0 & 2 & 4 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
2 & 0 & 0 & \cdots & 4
\end{bmatrix}, \quad \text{and} \quad H_1 = H_{-1} = \begin{bmatrix}
2 & 1 & 0 & \cdots & 1 \\
1 & 2 & 1 & \cdots & 0 \\
0 & 1 & 2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 0 & 0 & \cdots & 2
\end{bmatrix}. \tag{A.15}$$

Any matrix $A$ whose elements are matrices is called a block matrix, such as

$$A = \begin{bmatrix}
A_{0,0} & A_{0,1} & \cdots & A_{0,N-1} \\
A_{1,0} & A_{1,1} & \cdots & A_{1,N-1} \\
\vdots & \vdots & \ddots & \vdots \\
A_{M-1,0} & A_{M-1,1} & \cdots & A_{M-1,N-1}
\end{bmatrix}, \tag{A.16}$$

where each $A_{i,j}$ represents a $p \times q$ matrix. More specifically, $A$ is called an $M \times N$ block matrix of basic dimension $p \times q$. If the block structure is circulant, that is, $A_{i,j} = A_{i \bmod M,\, j \bmod N}$, then $A$ is called block circulant. If each $A_{i,j}$ is a circulant matrix, $A$ is called a circulant block matrix. Finally, if $A$ is both block circulant and circulant block, it is called doubly block circulant.

A.2.3 Kronecker Products

If $A$ and $B$ are $M_1 \times M_2$ and $N_1 \times N_2$ matrices, respectively, their Kronecker product is defined as

$$A \otimes B = \begin{bmatrix}
a(0,0)B & \cdots & a(0,M_2-1)B \\
\vdots & \ddots & \vdots \\
a(M_1-1,0)B & \cdots & a(M_1-1,M_2-1)B
\end{bmatrix}, \tag{A.17}$$

which is an $M_1 \times M_2$ block matrix of basic dimension $N_1 \times N_2$.
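As a sketch of how these pieces fit together (my own illustration, not an implementation from the book): the filter of Eq. (A.12) is separable into $(1/16)[1\;2\;1]^T[1\;2\;1]$, so its doubly block circulant matrix can be built as the Kronecker product of two one-dimensional circulant matrices and checked against FFT-based circular convolution. NumPy assumed:

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first column is c (cf. Eq. A.6)."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

N = 4
# 1-D factor of the separable filter, placed on a periodic grid of length N.
c = np.zeros(N)
c[0], c[1], c[-1] = 2 / 4, 1 / 4, 1 / 4

# Doubly block circulant matrix as a Kronecker product of 1-D circulants.
H = np.kron(circulant(c), circulant(c))         # (N^2) x (N^2)

# Check y = Hx against 2-D circular convolution computed via the FFT.
rng = np.random.default_rng(1)
X = rng.standard_normal((N, N))
h2d = np.outer(c, c)                            # full 2-D impulse response
y = (H @ X.reshape(-1)).reshape(N, N)
Y = np.real(np.fft.ifft2(np.fft.fft2(h2d) * np.fft.fft2(X)))
print(np.allclose(y, Y))  # True
```

The FFT check works because a doubly block circulant matrix is diagonalized by the two-dimensional DFT, so multiplication by $H$ and pointwise multiplication of 2-D spectra are the same operation.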