Appendix: Concepts of Linear Algebra

In this appendix, some essential topics in linear algebra are reviewed. For each topic, we present some definitions, basic properties, and numerical examples.

Notations

An $m \times n$ matrix $A$ consists of $m$ rows, $n$ columns, and $mn$ elements (real or complex numbers), and is denoted by
\[
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = [a_{ij}]_{i,j=1}^{m,n} = [a_{ij}]_{mn} = [a_{ij}].
\]
The element $a_{ii}$ is called the $i$th diagonal element of $A$, and $a_{ij}$ for $i \neq j$ is called the $(i,j)$th element of $A$. We say the size of $A$ is $m \times n$, or the order of $A$ is $m$ when $m = n$. An $m \times 1$ matrix is said to be an $m$-vector or a column $m$-vector, and a $1 \times n$ matrix is said to be an $n$-vector or a row $n$-vector. To avoid any confusion, an $n$-vector means a column vector in this appendix, and a row vector is represented by the transpose (defined shortly) of a column vector. Commonly, $\mathbb{R}^n$ and $\mathbb{C}^n$ denote the sets of real and complex column $n$-vectors, respectively, and $\mathbb{R}^{m \times n}$ and $\mathbb{C}^{m \times n}$ denote the sets of all $m \times n$ real and complex matrices, respectively. If we do not specify the type of a matrix $A$, then $A$ can be either real or complex. The following are examples of a $2 \times 3$ matrix, a column 2-vector, and a row 3-vector:
\[
A = \begin{bmatrix} 1 & 0 & 2-i \\ -2.5 & 3i & -4 \end{bmatrix}, \qquad
v = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \qquad
w = \begin{bmatrix} a & b & c \end{bmatrix}.
\]
$A$ is said to be a square matrix if $m = n$, and otherwise a rectangular matrix. $Z$ is said to be a zero matrix, denoted by $Z = [0]_{mn} = 0$, if all elements of $Z$ are zero. A matrix $D$ is said to be an $n \times n$ diagonal matrix if all elements of $D$ are zero except its diagonal elements; it is commonly written as $D = \operatorname{diag}(d_1, \dots, d_n)$. An $n \times n$ diagonal matrix with all diagonal elements equal to 1 is called the $n \times n$ identity matrix, denoted by $I_n$ or $I$. A matrix $T$ is said to be an upper (lower) triangular matrix if all its elements below (above) its diagonal are zero. A matrix $S$ is said to be a submatrix of $A$ if the rows and columns of $S$ are consecutive rows and columns of $A$; if those rows and columns start from the first ones, $S$ is also called a leading submatrix of $A$. For example,
\[
S = \begin{bmatrix} 1 & 2 \end{bmatrix}
\text{ is a submatrix of }
A = \begin{bmatrix} 3 & -1 & 4 \\ 5 & 1 & 2 \end{bmatrix},
\quad\text{and}\quad
S = \begin{bmatrix} 1 & -2 \\ -4 & 5 \end{bmatrix}
\text{ is a leading submatrix of }
A = \begin{bmatrix} 1 & -2 & 3 \\ -4 & 5 & -6 \\ 7 & -8 & 9 \end{bmatrix}.
\]

Basic Operations

Transpose and Hermitian

Given $A = [a_{ij}]$ in $\mathbb{R}^{m \times n}$, the transpose of $A$, denoted by $A^T$, is the $n \times m$ matrix whose rows are the columns of $A$ and whose columns are the rows of $A$. When $A$ is in $\mathbb{C}^{m \times n}$, the Hermitian (conjugate transpose) of $A$, denoted by $A^*$, is in $\mathbb{C}^{n \times m}$ and its $(i,j)$th element is $\overline{a_{ji}}$. For example,
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad
A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix} 1+i & -2i \\ 3 & 4-i \end{bmatrix}, \quad
B^* = \begin{bmatrix} 1-i & 3 \\ 2i & 4+i \end{bmatrix}.
\]

Trace of a Square Matrix

The trace of an $n \times n$ square matrix $A = [a_{ij}]$ is the sum of the diagonal elements of $A$, that is, $\operatorname{tr}(A) = \sum_{k=1}^{n} a_{kk}$.

Example. Let $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$. Then $\operatorname{tr}(A) = 1 + 4 = 5$.

It is not difficult to show that $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ provided that $AB$ and $BA$ both exist.

Dot Product (or Inner Product) and Orthogonality

Given two vectors $u = [u_1, \dots, u_n]^T$ and $v = [v_1, \dots, v_n]^T$ in $\mathbb{C}^n$, the dot product (or inner product) of $u$ and $v$ is the scalar
\[
\alpha = u^* v = \begin{bmatrix} \overline{u_1} & \cdots & \overline{u_n} \end{bmatrix}
\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}
= \sum_{k=1}^{n} \overline{u_k}\, v_k.
\]

Example. Let $u = \begin{bmatrix} 1 \\ 2-3i \end{bmatrix}$, $v = \begin{bmatrix} -4+i \\ 5-6i \end{bmatrix}$ and $w = \begin{bmatrix} -3 \\ 2 \end{bmatrix}$. Then
\[
u^* v = \begin{bmatrix} 1 & 2+3i \end{bmatrix} \begin{bmatrix} -4+i \\ 5-6i \end{bmatrix}
= (-4+i) + (2+3i)(5-6i) = 24 + 4i,
\quad\text{and}\quad
w^T w = (-3)^2 + 2^2 = 13.
\]

Vectors $u$ and $v$ are said to be orthogonal if $u^* v = 0$. A set of vectors $\{v_1, \dots, v_m\}$ is said to be orthogonal if $v_i^* v_j = 0$ for all $i \neq j$, and orthonormal if, in addition, $v_i^* v_i = 1$ for all $i = 1, \dots, m$. Consider the vectors
\[
u_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad
u_2 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}, \quad
v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}
\quad\text{and}\quad
v_2 = \frac{1}{2\sqrt{2}} \begin{bmatrix} 2 \\ 2 \end{bmatrix}.
\]
The set $\{u_1, u_2\}$ is orthogonal and the set $\{v_1, v_2\}$ is orthonormal. The dot product satisfies the Cauchy-Schwarz inequality: $|x^* y|^2 \leq (x^* x)(y^* y)$ for any vectors $x$ and $y$ in $\mathbb{C}^n$.
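These operations map directly onto NumPy, which makes them easy to verify numerically. The following is a minimal sketch (NumPy and the particular arrays are our additions, not part of the appendix); note that np.vdot conjugates its first argument, matching the definition of $u^* v$ above.

```python
import numpy as np

# Hermitian (conjugate transpose): the (i,j)th element of B* is conj(b_ji)
B = np.array([[1 + 1j, -2j],
              [3 + 0j, 4 - 1j]])
B_star = B.conj().T                      # [[1-1j, 3], [2j, 4+1j]]

# tr(AB) = tr(BA) whenever both products exist (here a 2x2 vs. a 3x3 product)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
C = np.array([[1, 0], [0, 1], [1, 1]])
assert np.trace(A @ C) == np.trace(C @ A)

# Inner product u*v = sum_k conj(u_k) v_k; np.vdot conjugates its first argument
u = np.array([1, 2 - 3j])
v = np.array([-4 + 1j, 5 - 6j])
alpha = np.vdot(u, v)                    # (-4+1j) + (2+3j)(5-6j) = 24+4j

# {v1, v2}: pairwise products vanish, self products equal 1 (orthonormal)
v1 = np.array([1, -1]) / np.sqrt(2)
v2 = np.array([2, 2]) / (2 * np.sqrt(2))
assert np.isclose(np.vdot(v1, v2), 0) and np.isclose(np.vdot(v2, v2), 1)
```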
"1 2 2 "1 22 2 The set u1, u2 is orthogonal and the set v1, v2 is orthonormal. The dot product satisfies the Cauchy-Schwarz Inequality: &x!y'2 % &x!x'&y!y' for any vectors x and y in Cn. Matrix Addition and Scalar Multiplication Two matrices with the same size can be added or subtracted element-wise, and a matrix can be multiplied by a scalar (real or complex) element-wise. Let A " $aij %mn, B " $bij %mn and !," be scalars. Then A $ B " $aij $ bij %mn, !A " $!aij %mn and !A $ "B " $!aij $ "bij %mn. 123 789 Example Let A " , B " and ! " "2j. Then 456 10 11 12 81012 "2j "4j "6j "11 "10 "9 A $ B " , !A " , and 3A " 2B " . 14 16 18 "8j "10j "12j "8 "7 "6 Matrix addition and scalar multiplication have the following properties: 1. A $ B " B $ A; 2. A $&B $ C'"&A $ B'$C; 3. &!"'A " !&"A' " "&!A'; 4. &A $ B'T " AT $ BT. Matrix Multiplication Given two matrices A " $aij % and B " $bkl % with sizes m ! r and r ! n, the product C " AB " $cij % is an m ! n matrix and its &i,j'thelement is defined as b1j r b2j cij " $ aktbkj " ai1 ai2 ! air k"1 " brj the dot product of the ith row of A " and the jth column of B. 12 1 "1 Example Let A " 34 , and B " . Then "12 56 "13 "1 "1 "1 2 "3 AB " "15 , BAT " , BB " , and 357 "35 "17 51117 AAT " 11 25 39 . 17 39 61 For a square matrix A, the notation An for a positive integer n stands for the product AA#A (n times) and A0 & I. Matrix multiplication has the following properties: 1. ABC " A&BC'"&AB'C; 2. &A $ B'C " AC $ BC; 3. A&B $ C' " AB $ AC; 4. &AB'T " BTAT if A and B are real, and &AB'! " B!A! if A and B are complex. In general, matrix multiplication is not commutative, i.e. AB ! BA even if both AB and BA are well-defined and have the same size. When A is a matrix and B is a vector, we can write AB in terms of the columns of A and elements of B, or the rows of A and vector B. Let A be an m ! n matrix and R1 A " C1 # Cn " " Rm b1 % % where Cis and Ris are columns and rows of A, respectively. Let B " " . Then bn R1B AB " b1C1 $ ! $ bnCm " " . RmB Partitioned Matrices In many applications it is convenient to partition a matrix into blocks (submatrices). For 123 A11 A12 example, the matrix A " 456 can be partitioned as A " where A21 A22 789 12 3 A11 " , A " , A21 " 78 , and A22 " $9%;or 45 12 6 4 56 A11 " 12 , A12 " $3%, A21 " , and A22 " . Operations on partitioned 7 89 matrices work as if the blocks were scalars. For example, A11 A12 A13 B11 B12 B13 A11$B11 A12$B12 A13$B13 $ " , A21 A22 A23 B21 B22 B23 A21$B21 A22$B22 A23$B23 A11 A12 A11B11$A12B21 A11B12$A12B22 B11 B12 A21 A22 " A21B11$A22B21 A21B12$A22B22 B21 B22 A31 A32 A31B11$A32B21 A31B12$A32B22 provided that all the block products are well-defined. Determinant of a Square Matrix Determinant The determinant of a square matrix A, denoted by det&A', is a scalar which provides some useful information about A. The determinants of 2 ! 2 and 3 ! 3 matrices are defined respectively as: a11 a12 det " a11a22 " a12a21, a21 a22 a11 a12 a13 a11a22a33 $ a21a13a32 $ a31a12a23 det a21 a22 a23 " . "a11a23a32 " a21a12a33 " a31a13a22 a31 a32 a33 For a general n ! n matrix A " $aij %, the determinant is defined as: n n i$k k$j det&A' " $&"1' aik det&Aik ' " $&"1' akj det&Akj ' k"1 k"1 for any 1 % i,j % n where Apq is the &n " 1' ! &n " 1' matrix resulting from the deletion of the row p and the column q of A. 
Determinant of a Square Matrix

Determinant

The determinant of a square matrix $A$, denoted by $\det(A)$, is a scalar which provides some useful information about $A$. The determinants of $2 \times 2$ and $3 \times 3$ matrices are defined, respectively, as
\[
\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11}a_{22} - a_{12}a_{21},
\]
\[
\det \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
= a_{11}a_{22}a_{33} + a_{21}a_{13}a_{32} + a_{31}a_{12}a_{23}
- a_{11}a_{23}a_{32} - a_{21}a_{12}a_{33} - a_{31}a_{13}a_{22}.
\]
For a general $n \times n$ matrix $A = [a_{ij}]$, the determinant is defined as
\[
\det(A) = \sum_{k=1}^{n} (-1)^{i+k} a_{ik} \det(A_{ik})
= \sum_{k=1}^{n} (-1)^{k+j} a_{kj} \det(A_{kj})
\]
for any $1 \leq i, j \leq n$, where $A_{pq}$ is the $(n-1) \times (n-1)$ matrix resulting from the deletion of row $p$ and column $q$ of $A$. For example, with
\[
A = \begin{bmatrix} 1 & -2 & 3 \\ -4 & 5 & -6 \\ 7 & -8 & 9 \end{bmatrix},
\]
expanding along the first row ($i = 1$) gives
\[
\det(A) = (-1)^{1+1}(1)\det\begin{bmatrix} 5 & -6 \\ -8 & 9 \end{bmatrix}
+ (-1)^{1+2}(-2)\det\begin{bmatrix} -4 & -6 \\ 7 & 9 \end{bmatrix}
+ (-1)^{1+3}(3)\det\begin{bmatrix} -4 & 5 \\ 7 & -8 \end{bmatrix}
= (-3) - (-2)(6) + (3)(-3) = 0,
\]
while expanding along the second column ($j = 2$) gives
\[
\det(A) = (-1)^{1+2}(-2)\det\begin{bmatrix} -4 & -6 \\ 7 & 9 \end{bmatrix}
+ (-1)^{2+2}(5)\det\begin{bmatrix} 1 & 3 \\ 7 & 9 \end{bmatrix}
+ (-1)^{3+2}(-8)\det\begin{bmatrix} 1 & 3 \\ -4 & -6 \end{bmatrix}
= -(-2)(6) + (5)(-12) - (-8)(6) = 0.
\]
The determinant of $A_{pq}$, $\det(A_{pq})$, is called the $(p,q)$th minor of $A$, and $(-1)^{p+q}\det(A_{pq})$ is called the cofactor of $a_{pq}$. Directly from the definition, the determinant of a diagonal matrix is the product of its diagonal elements, and the determinant of an upper or lower triangular matrix is likewise the product of its diagonal elements. Determinants have the following properties:
1. $\det(AB) = \det(A)\det(B)$;
2. $\det(\alpha A) = \alpha^n \det(A)$ for any scalar $\alpha$ and $n \times n$ matrix $A$;
3. $\det(A^T) = \det(A)$;
4. $\det(A^k) = (\det(A))^k$;
5. $\det(A) = 0$ if any row (or column) of $A$ is a scalar multiple of another row (or column).
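The cofactor expansion translates directly into a short recursive routine. The sketch below is NumPy-based and the helper name det_cofactor is ours; it expands along the first row at every level and reproduces the hand computation above, while np.linalg.det agrees up to floating-point roundoff.

```python
import numpy as np

def det_cofactor(A, i=0):
    """Determinant via cofactor expansion along row i (0-based).

    A direct transcription of the definition above; it costs O(n!)
    operations, so it is for illustration only.
    """
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for k in range(n):
        # A_ik: delete row i and column k; (-1)**(i+k) is the cofactor sign
        minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
        total += (-1) ** (i + k) * A[i, k] * det_cofactor(minor)
    return total

A = np.array([[1, -2, 3],
              [-4, 5, -6],
              [7, -8, 9]])
print(det_cofactor(A))       # 0, matching both expansions above
print(np.linalg.det(A))      # ~0.0, up to floating-point roundoff
```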