5. Orthogonal Matrices


L. Vandenberghe, ECE133A (Spring 2021)

Outline

• matrices with orthonormal columns
• orthogonal matrices
• tall matrices with orthonormal columns
• complex matrices with orthonormal columns

Orthonormal vectors

A collection of real $m$-vectors $a_1, a_2, \ldots, a_n$ is orthonormal if

• the vectors have unit norm: $\|a_i\| = 1$
• they are mutually orthogonal: $a_i^T a_j = 0$ if $i \neq j$

Example:
\[
\begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}, \qquad
\frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad
\frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}
\]

Matrix with orthonormal columns

$A \in \mathbf{R}^{m \times n}$ has orthonormal columns if its Gram matrix is the identity matrix:
\[
A^T A =
\begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}^T
\begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
=
\begin{bmatrix}
a_1^T a_1 & a_1^T a_2 & \cdots & a_1^T a_n \\
a_2^T a_1 & a_2^T a_2 & \cdots & a_2^T a_n \\
\vdots & \vdots & & \vdots \\
a_n^T a_1 & a_n^T a_2 & \cdots & a_n^T a_n
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}
\]

There is no standard short name for "matrix with orthonormal columns".

Matrix-vector product

If $A \in \mathbf{R}^{m \times n}$ has orthonormal columns, then the linear function $f(x) = Ax$

• preserves inner products: $(Ax)^T (Ay) = x^T A^T A y = x^T y$
• preserves norms: $\|Ax\| = \left( (Ax)^T (Ax) \right)^{1/2} = (x^T x)^{1/2} = \|x\|$
• preserves distances: $\|Ax - Ay\| = \|x - y\|$
• preserves angles:
\[
\angle(Ax, Ay) = \arccos \frac{(Ax)^T (Ay)}{\|Ax\| \, \|Ay\|}
= \arccos \frac{x^T y}{\|x\| \, \|y\|} = \angle(x, y)
\]

Left-invertibility

If $A \in \mathbf{R}^{m \times n}$ has orthonormal columns, then

• $A$ is left-invertible with left inverse $A^T$: by definition, $A^T A = I$
• $A$ has linearly independent columns (from page 4.24 or page 5.2): $Ax = 0 \implies A^T A x = x = 0$
• $A$ is tall or square: $m \geq n$ (see page 4.13)
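To make the Gram-matrix test and the norm-preservation properties concrete, here is a small numerical check. It is an illustrative sketch only, not part of the original slides; Python with NumPy is an assumed environment. The matrix $A$ has the three orthonormal vectors of the example above as its columns.

    # Verify A^T A = I and the norm/inner-product preservation of f(x) = Ax.
    import numpy as np

    # columns: the three orthonormal 3-vectors from the example above
    A = np.column_stack([
        np.array([0.0, 0.0, -1.0]),
        np.array([1.0, 1.0, 0.0]) / np.sqrt(2),
        np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
    ])

    # Gram matrix A^T A should be the identity
    print(np.allclose(A.T @ A, np.eye(3)))                       # True

    # f(x) = Ax preserves norms and inner products
    x, y = np.random.randn(3), np.random.randn(3)
    print(np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x)))  # True
    print(np.isclose((A @ x) @ (A @ y), x @ y))                  # True

Here $A$ happens to be square, so it is in fact orthogonal; dropping one column gives a tall matrix with orthonormal columns for which the same checks still pass.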
Orthogonal matrix

A square real matrix with orthonormal columns is called orthogonal.

Nonsingularity (from the equivalences on page 4.14): if $A$ is orthogonal, then

• $A$ is invertible, with inverse $A^T$: $A^T A = I$ and $A$ square imply $A^{-1} = A^T$
• $A^T$ is also an orthogonal matrix
• the rows of $A$ are orthonormal (have norm one and are mutually orthogonal)

Note: if $A \in \mathbf{R}^{m \times n}$ has orthonormal columns and $m > n$, then $A A^T \neq I$.

Permutation matrix

Let $\pi = (\pi_1, \pi_2, \ldots, \pi_n)$ be a permutation (reordering) of $(1, 2, \ldots, n)$.

• we associate with $\pi$ the $n \times n$ permutation matrix $A$ defined by
\[
A_{i \pi_i} = 1, \qquad A_{ij} = 0 \ \text{if } j \neq \pi_i
\]
• $Ax$ is a permutation of the elements of $x$: $Ax = (x_{\pi_1}, x_{\pi_2}, \ldots, x_{\pi_n})$
• $A$ has exactly one element equal to 1 in each row and each column

Orthogonality: permutation matrices are orthogonal.

• $A^T A = I$ because $A$ has exactly one element equal to one in each row:
\[
(A^T A)_{ij} = \sum_{k=1}^{n} A_{ki} A_{kj}
= \begin{cases} 1 & i = j \\ 0 & \text{otherwise} \end{cases}
\]
• $A^{-1} = A^T$ is the inverse permutation matrix

Example

• permutation on $\{1, 2, 3, 4\}$:
\[
(\pi_1, \pi_2, \pi_3, \pi_4) = (2, 4, 1, 3)
\]
• corresponding permutation matrix and its inverse:
\[
A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},
\qquad
A^{-1} = A^T = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{bmatrix}
\]
• $A^T$ is the permutation matrix associated with the permutation
\[
(\tilde{\pi}_1, \tilde{\pi}_2, \tilde{\pi}_3, \tilde{\pi}_4) = (3, 1, 4, 2)
\]

Plane rotation

Rotation in a plane (the figure shows $Ax$ obtained by rotating $x$ over the angle $\theta$):
\[
A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\]

Rotation in a coordinate plane in $\mathbf{R}^n$: for example,
\[
A = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}
\]
describes a rotation in the $(x_1, x_3)$ plane in $\mathbf{R}^3$.

Reflector

A reflector is a matrix of the form
\[
A = I - 2 a a^T
\]
with $a$ a unit-norm vector ($\|a\| = 1$).

Properties:

• a reflector matrix is symmetric
• a reflector matrix is orthogonal:
\[
A^T A = (I - 2 a a^T)(I - 2 a a^T) = I - 4 a a^T + 4 a a^T a a^T = I
\]

Geometrical interpretation of reflector

(The figure shows a vector $x$, its projection $y = (I - a a^T) x$ on the hyperplane, its reflection $z = Ax = (I - 2 a a^T) x$, and the line through $a$ and the origin.)

• $H = \{ u \mid a^T u = 0 \}$ is the (hyper-)plane of vectors orthogonal to $a$
• if $\|a\| = 1$, the projection of $x$ on $H$ is given by
\[
y = x - (a^T x) a = x - a (a^T x) = (I - a a^T) x
\]
(see the exercise below)
• the reflection of $x$ through the hyperplane is given by the product with the reflector:
\[
z = y + (y - x) = (I - 2 a a^T) x
\]

Exercise

Suppose $\|a\| = 1$; show that the projection of $x$ on $H = \{ u \mid a^T u = 0 \}$ is $y = x - (a^T x) a$.

• we verify that $y \in H$:
\[
a^T y = a^T \left( x - a (a^T x) \right) = a^T x - (a^T a)(a^T x) = a^T x - a^T x = 0
\]
• now consider any $z \in H$ with $z \neq y$, and show that $\|x - z\| > \|x - y\|$:
\[
\begin{aligned}
\|x - z\|^2 &= \|x - y + y - z\|^2 \\
&= \|x - y\|^2 + 2 (x - y)^T (y - z) + \|y - z\|^2 \\
&= \|x - y\|^2 + 2 (a^T x) \, a^T (y - z) + \|y - z\|^2 \\
&= \|x - y\|^2 + \|y - z\|^2 \qquad \text{(because } a^T y = a^T z = 0 \text{)} \\
&> \|x - y\|^2
\end{aligned}
\]

Product of orthogonal matrices

If $A_1, \ldots, A_k$ are orthogonal matrices of equal size, then the product $A = A_1 A_2 \cdots A_k$ is orthogonal:
\[
A^T A = (A_1 A_2 \cdots A_k)^T (A_1 A_2 \cdots A_k)
= A_k^T \cdots A_2^T A_1^T A_1 A_2 \cdots A_k = I
\]

Linear equation with orthogonal matrix

A linear equation with orthogonal coefficient matrix $A$ of size $n \times n$,
\[
Ax = b,
\]
has solution
\[
x = A^{-1} b = A^T b
\]

• the solution can be computed in $2n^2$ flops by matrix-vector multiplication
• the cost is less than order $n^2$ if $A$ has special properties; for example (see the sketch after this list):

  permutation matrix: 0 flops
  reflector (given $a$): order $n$ flops
  plane rotation: order 1 flops
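The reflector flop count can be made concrete: since $A = I - 2 a a^T$ is symmetric and orthogonal, the solution of $Ax = b$ is $x = A^T b = Ab = b - 2 a (a^T b)$, computable in order $n$ flops without ever forming $A$. A minimal sketch, assuming Python with NumPy (not part of the original slides):

    # Solve Ax = b for the reflector A = I - 2 a a^T in O(n) flops.
    import numpy as np

    n = 5
    a = np.random.randn(n)
    a /= np.linalg.norm(a)               # unit-norm vector defining the reflector
    b = np.random.randn(n)

    # O(n) solve: x = A^T b = b - 2 a (a^T b); the n x n matrix is never formed
    x = b - 2.0 * a * (a @ b)

    # check against the explicitly formed reflector
    A = np.eye(n) - 2.0 * np.outer(a, a)
    print(np.allclose(A.T @ A, np.eye(n)))   # orthogonal: True
    print(np.allclose(A @ x, b))             # x solves Ax = b: True

For a permutation matrix the "solve" is just an index shuffle (0 flops), and a plane rotation changes only two entries of $b$ (order 1 flops).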
Tall matrix with orthonormal columns

Suppose $A \in \mathbf{R}^{m \times n}$ is tall ($m > n$) and has orthonormal columns.

• $A^T$ is a left inverse of $A$: $A^T A = I$
• $A$ has no right inverse; in particular, $A A^T \neq I$
• below, we give a geometric interpretation of the matrix $A A^T$

Range

• the span of a collection of vectors is the set of all their linear combinations:
\[
\operatorname{span}\{a_1, a_2, \ldots, a_n\}
= \{ x_1 a_1 + x_2 a_2 + \cdots + x_n a_n \mid x \in \mathbf{R}^n \}
\]
• the range of a matrix $A \in \mathbf{R}^{m \times n}$ is the span of its column vectors:
\[
\operatorname{range}(A) = \{ Ax \mid x \in \mathbf{R}^n \}
\]

Example:
\[
\operatorname{range}\left( \begin{bmatrix} 1 & 0 \\ 1 & 2 \\ 0 & -1 \end{bmatrix} \right)
= \left\{ \begin{bmatrix} x_1 \\ x_1 + 2 x_2 \\ -x_2 \end{bmatrix} \;\middle|\; x_1, x_2 \in \mathbf{R} \right\}
\]

Projection on range of matrix with orthonormal columns

Suppose $A \in \mathbf{R}^{m \times n}$ has orthonormal columns; we show that the vector $A A^T b$ is the orthogonal projection of an $m$-vector $b$ on $\operatorname{range}(A)$. (The figure shows $b$, its projection $A A^T b$, and the plane $\operatorname{range}(A)$.)

• $\hat{x} = A^T b$ satisfies $\|A \hat{x} - b\| < \|Ax - b\|$ for all $x \neq \hat{x}$
• this extends the result on page 2.12 (where $A = (1/\|a\|) \, a$)

Proof

The squared distance of $b$ to an arbitrary point $Ax$ in $\operatorname{range}(A)$ is
\[
\begin{aligned}
\|Ax - b\|^2 &= \| A(x - \hat{x}) + (A \hat{x} - b) \|^2
\qquad \text{(where } \hat{x} = A^T b \text{)} \\
&= \|A(x - \hat{x})\|^2 + \|A \hat{x} - b\|^2 + 2 \left( A(x - \hat{x}) \right)^T (A \hat{x} - b) \\
&= \|A(x - \hat{x})\|^2 + \|A \hat{x} - b\|^2 \\
&= \|x - \hat{x}\|^2 + \|A \hat{x} - b\|^2 \\
&\geq \|A \hat{x} - b\|^2
\end{aligned}
\]
with equality only if $x = \hat{x}$.

• line 3 follows because $A^T (A \hat{x} - b) = \hat{x} - A^T b = 0$
• line 4 follows from $A^T A = I$

Gram matrix

$A \in \mathbf{C}^{m \times n}$ has orthonormal columns if its Gram matrix is the identity matrix:
\[
A^H A =
\begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}^H
\begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
=
\begin{bmatrix}
a_1^H a_1 & a_1^H a_2 & \cdots & a_1^H a_n \\
a_2^H a_1 & a_2^H a_2 & \cdots & a_2^H a_n \\
\vdots & \vdots & & \vdots \\
a_n^H a_1 & a_n^H a_2 & \cdots & a_n^H a_n
\end{bmatrix}
= I
\]

• columns have unit norm: $\|a_i\|^2 = a_i^H a_i = 1$
• columns are mutually orthogonal: $a_i^H a_j = 0$ for $i \neq j$

Unitary matrix

A square complex matrix with orthonormal columns is called unitary.

Inverse: $A^H A = I$ with $A$ square implies that

• a unitary matrix is nonsingular, with inverse $A^H$
• if $A$ is unitary, then $A^H$ is unitary

Discrete Fourier transform matrix

Recall the definition from page 3.37 (with $\omega = e^{2 \pi j / n}$ and $j = \sqrt{-1}$):
\[
W = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega^{-1} & \omega^{-2} & \cdots & \omega^{-(n-1)} \\
1 & \omega^{-2} & \omega^{-4} & \cdots & \omega^{-2(n-1)} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \omega^{-(n-1)} & \omega^{-2(n-1)} & \cdots & \omega^{-(n-1)(n-1)}
\end{bmatrix}
\]
Since $W^H W = n I$, the scaled matrix $(1/\sqrt{n}) W$ has orthonormal columns, i.e., is unitary.
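As a closing numerical check, the sketch below forms the DFT matrix $W$ entrywise and verifies that $(1/\sqrt{n}) W$ has an identity Gram matrix, i.e., is unitary. This is an illustration added here, not part of the original slides; Python with NumPy is assumed, and the size n = 8 is arbitrary.

    # Check that the scaled DFT matrix (1/sqrt(n)) W is unitary:
    # W[k, l] = omega^(-k*l) for k, l = 0, ..., n-1, with omega = exp(2*pi*j/n).
    import numpy as np

    n = 8
    omega = np.exp(2j * np.pi / n)
    k = np.arange(n)
    W = omega ** (-np.outer(k, k))        # n x n DFT matrix
    U = W / np.sqrt(n)

    # Gram matrix U^H U should be the identity
    print(np.allclose(U.conj().T @ U, np.eye(n)))   # True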