Chapter 2: A Short Review of Matrix Algebra

BIOS 2083 Linear Models, Abdus S. Wahed

2.1 Vectors and vector spaces

Definition 2.1.1. A vector a of dimension n is a collection of n elements, typically written as

    a = (a_1, a_2, ..., a_n)^T = (a_i)_n.

Vectors of length 2 (two-dimensional vectors) can be thought of as points in the plane (see Figure 2.1).

[Figure 2.1: Vectors in two- and three-dimensional spaces; e.g., the points (-1.5, 2), (1, 1), (1, -2) in the plane and (2.5, 1.5, 0.95), (0, 1.5, 0.95) in three-dimensional space.]

• A vector with all elements equal to zero is known as a zero vector and is denoted by 0.

• A vector whose elements are stacked vertically is known as a column vector, whereas a vector whose elements are stacked horizontally is referred to as a row vector. (Unless otherwise mentioned, all vectors will be taken to be column vectors.)

• A row vector representation of a column vector is known as its transpose. We will use the notation ' or T to indicate a transpose. For instance, if a = (a_1, a_2, ..., a_n)^T is a column vector and b = (a_1 a_2 ... a_n) is the corresponding row vector, then we write b = a^T or a = b^T.

• Vectors of the same dimension are conformable to algebraic operations such as addition and subtraction. The sum of two or more vectors of dimension n is another n-dimensional vector whose elements are the sums of the corresponding elements of the summand vectors. That is,

    (a_i)_n ± (b_i)_n = (a_i ± b_i)_n.

• Vectors can be multiplied by a scalar: c(a_i)_n = (c a_i)_n.

• The product of two vectors of the same dimension can be formed when one of them is a row vector and the other is a column vector. The result is called the inner, dot, or scalar product. If a = (a_1, a_2, ..., a_n)^T and b = (b_1, b_2, ..., b_n)^T, then

    a^T b = a_1 b_1 + a_2 b_2 + ... + a_n b_n.

Definition 2.1.2.
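The notes themselves contain no code; as a quick numerical check of the vector operations above, here is a short NumPy sketch (NumPy and the example values are my additions, not part of the notes):

```python
import numpy as np

# Two 3-dimensional column vectors (hypothetical example values).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

# Inner (dot/scalar) product: a^T b = a1*b1 + a2*b2 + a3*b3.
inner = a @ b          # 1*4 + 2*(-1) + 3*2 = 8

# Elementwise addition and scalar multiplication of conformable vectors.
s = a + b              # (a_i + b_i)_n
c = 2.5 * a            # (c * a_i)_n

print(inner)           # 8.0
print(s)               # [5. 1. 5.]
print(c)               # [2.5 5.  7.5]
```

NumPy stores one-dimensional arrays without a row/column orientation, so `a @ b` computes the scalar product directly without an explicit transpose.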
The length, magnitude, or Euclidean norm of a vector is defined as the square root of the sum of squares of its elements and is denoted by ||·||. For example,

    ||a|| = ||(a_i)_n|| = sqrt( sum_{i=1}^{n} a_i^2 ) = sqrt(a^T a).

• The length of the sum of two or more vectors is less than or equal to the sum of the lengths of the vectors (the triangle inequality, which follows from the Cauchy–Schwarz inequality |a^T b| ≤ ||a|| ||b||):

    ||a + b|| ≤ ||a|| + ||b||.

Definition 2.1.3. A set of vectors {a_1, a_2, ..., a_m} is linearly dependent if at least one of them can be written as a linear combination of the others. In other words, {a_1, a_2, ..., a_m} are linearly dependent if there exists at least one non-zero c_j such that

    sum_{j=1}^{m} c_j a_j = 0.        (2.1.1)

Equivalently, for some k with c_k non-zero,

    a_k = -(1/c_k) sum_{j≠k} c_j a_j.

Definition 2.1.4. A set of vectors is linearly independent if it is not linearly dependent. That is, in order for (2.1.1) to hold, all c_j's must be equal to zero.

Definition 2.1.5. Two vectors a and b are orthogonal if their scalar product is zero. That is, a^T b = 0, and we write a ⊥ b.

Definition 2.1.6. A set of vectors is said to be mutually orthogonal if the members of every pair of vectors belonging to the set are orthogonal.

• If non-zero vectors are mutually orthogonal, then they are linearly independent.

Definition 2.1.7. Vector space. A set of vectors which is closed under addition and scalar multiplication is known as a vector space. Thus if V is a vector space, then for any two vectors a and b from V and any two constants c_a and c_b, (i) c_a a + c_b b ∈ V, and (ii) c_a a ∈ V.

Definition 2.1.8. Span. All possible linear combinations of a set of linearly independent vectors form the span of that set. Thus if A = {a_1, a_2, ..., a_m} is a set of m linearly independent vectors, then the span of A is given by

    span(A) = { a : a = sum_{j=1}^{m} c_j a_j }  for some numbers c_j, j = 1, 2, ..., m.
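The facts above (mutual orthogonality implies linear independence; the triangle inequality) can be verified numerically. A minimal NumPy sketch with a hypothetical set of three mutually orthogonal vectors:

```python
import numpy as np

# A hypothetical mutually orthogonal set: pairwise dot products are zero.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 2.0])
V = np.column_stack([v1, v2, v3])

# Mutual orthogonality: V^T V is diagonal (off-diagonal entries are v_i^T v_j).
G = V.T @ V
orthogonal = np.allclose(G - np.diag(np.diag(G)), 0.0)

# Linear independence follows: the stacked matrix has full rank.
rank = np.linalg.matrix_rank(V)

# Triangle inequality: ||v1 + v2|| <= ||v1|| + ||v2||.
triangle = np.linalg.norm(v1 + v2) <= np.linalg.norm(v1) + np.linalg.norm(v2)

print(orthogonal, rank, triangle)   # True 3 True
```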
Viewed differently, the set of vectors A generates the vector space span(A) and is referred to as a basis of span(A). Formally:

• Let a_1, a_2, ..., a_m be a set of m linearly independent n-dimensional vectors in a vector space V that spans V. Then a_1, a_2, ..., a_m together form a basis of V, and the dimension of a vector space is defined as the number of vectors in its basis. That is, dim(V) = m.

2.2 Matrix

Definition 2.2.1. A matrix is a rectangular or square arrangement of numbers. A matrix with m rows and n columns is referred to as an m × n (read as 'm by n') matrix. An m × n matrix A with (i, j)th element a_ij is written as

    A = (a_ij)_{m×n} = [ a_11 a_12 ... a_1n ]
                       [ a_21 a_22 ... a_2n ]
                       [ ...  ...  ... ...  ]
                       [ a_m1 a_m2 ... a_mn ].

If m = n, then the matrix is a square matrix.

Definition 2.2.2. A diagonal matrix is a square matrix with zeros in all off-diagonal cells. A diagonal matrix with diagonal elements a_1, a_2, ..., a_n is written as

    diag(a_1, a_2, ..., a_n) = [ a_1  0  ...  0  ]
                               [  0  a_2 ...  0  ]
                               [ ... ... ... ... ]
                               [  0   0  ... a_n ].

Definition 2.2.3. An n × n diagonal matrix with all diagonal elements equal to 1 is known as the identity matrix of order n and is denoted by I_n.

A similar notation J_{mn} is sometimes used for an m × n matrix with all elements equal to 1, i.e.,

    J_{mn} = [ 1 1 ... 1 ]
             [ 1 1 ... 1 ]
             [ ... ... . ]
             [ 1 1 ... 1 ]  = [1_m 1_m ... 1_m].

Like vectors, matrices with the same dimensions can be added together, resulting in another matrix, and any matrix is conformable to multiplication by a scalar. If A = (a_ij)_{m×n} and B = (b_ij)_{m×n}, then

1. A ± B = (a_ij ± b_ij)_{m×n}, and
2. cA = (c a_ij)_{m×n}.

Definition 2.2.4. The transpose of a matrix A = (a_ij)_{m×n} is defined by A^T = (a_ji)_{n×m}.

• If A = A^T, then A is symmetric.
• (A + B)^T = A^T + B^T.

Definition 2.2.5. Matrix product.
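The special matrices above map directly onto NumPy constructors; a brief sketch (NumPy and the example values are my additions):

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])     # diag(a_1, a_2, a_3)
I3 = np.eye(3)                   # identity matrix I_3
J = np.ones((2, 3))              # J_{2x3}: all elements equal to 1

# Transpose and symmetry: A + A^T is always symmetric.
A = np.array([[1.0, 2.0],
              [0.0, 5.0]])
S = A + A.T
symmetric = np.allclose(S, S.T)

print(symmetric)                 # True
```

Multiplying by the identity leaves a matrix unchanged (`I3 @ D` equals `D`), mirroring the defining property of I_n.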
If A = (a_ij)_{m×n} and B = (b_ij)_{n×p}, then

    AB = (c_ij)_{m×p},   c_ij = sum_{k=1}^{n} a_ik b_kj = a_i^T b_j,

where a_i is the ith row of A (viewed as a vector) and b_j is the jth column (vector) of B.

• (AB)^T = B^T A^T,
• (AB)C = A(BC), whenever defined,
• A(B + C) = AB + AC, whenever defined,
• J_{mn} J_{np} = n J_{mp}.

2.3 Rank, Column Space and Null Space

Definition 2.3.1. The rank of a matrix A is the number of linearly independent rows or columns of A. We denote it by rank(A).

• rank(A^T) = rank(A).
• An m × n matrix A with rank m (respectively, rank n) is said to have full row (column) rank.
• If A is a square matrix with n rows and rank(A) < n, then A is singular and its inverse does not exist.
• rank(AB) ≤ min(rank(A), rank(B)).
• rank(A^T A) = rank(A A^T) = rank(A) = rank(A^T).

Definition 2.3.2. Inverse of a square matrix. If A is a square matrix with n rows and rank(A) = n, then A is called non-singular and there exists a matrix A^{-1} such that A A^{-1} = A^{-1} A = I_n. The matrix A^{-1} is known as the inverse of A.

• A^{-1} is unique.
• If A and B are invertible and have the same dimension, then (AB)^{-1} = B^{-1} A^{-1}.
• (cA)^{-1} = A^{-1}/c.
• (A^T)^{-1} = (A^{-1})^T.

Definition 2.3.3. Column space. The column space of a matrix A is the vector space generated by the columns of A. If A = (a_ij)_{m×n} = (a_1 a_2 ... a_n), then the column space of A, denoted by C(A) or R(A), is given by

    C(A) = { a : a = sum_{j=1}^{n} c_j a_j }  for scalars c_j, j = 1, 2, ..., n.

Alternatively, a ∈ C(A) iff there exists a vector c such that a = Ac.

• What is the dimension of the vectors in C(A)?
• How many vectors will a basis of C(A) have?
• dim(C(A)) = ?
• If A = BC, then C(A) ⊆ C(B).
• If C(A) ⊆ C(B), then there exists a matrix C such that A = BC.

Example 2.3.1. Find a basis for the column space of the matrix

    A = [ -1  2  -1 ]
        [  1  1   4 ]
        [  0  2   2 ].

Definition 2.3.4.
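The product, rank, and inverse identities listed above can be checked numerically. A hedged NumPy sketch with hypothetical 2 × 2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# (AB)^T = B^T A^T
transpose_rule = np.allclose((A @ B).T, B.T @ A.T)

# rank(A^T A) = rank(A); here det(A) = -2, so A has full rank 2.
rank_AtA = np.linalg.matrix_rank(A.T @ A)

# Since rank(A) = 2, the inverse exists and A A^{-1} = I_2.
Ainv = np.linalg.inv(A)
inverse_check = np.allclose(A @ Ainv, np.eye(2))

# (AB)^{-1} = B^{-1} A^{-1}
product_inverse = np.allclose(np.linalg.inv(A @ B),
                              np.linalg.inv(B) @ np.linalg.inv(A))

print(transpose_rule, rank_AtA, inverse_check, product_inverse)
```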
Null space. The null space of an m × n matrix A is defined as the vector space consisting of the solutions of the system of equations Ax = 0. The null space of A is denoted by N(A) and can be written as

    N(A) = { x : Ax = 0 }.

• What is the dimension of the vectors in N(A)?
• How many vectors are there in a basis of N(A)?
• dim(N(A)) = n - rank(A), known as the nullity of A.
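The rank–nullity relation dim(N(A)) = n - rank(A) can be illustrated with a matrix whose columns are deliberately dependent. A sketch using the SVD (the matrix below is a made-up example, not one from the notes): the right singular vectors corresponding to zero singular values span N(A).

```python
import numpy as np

# Third column = column 1 + column 2, so rank(A) = 2 and nullity = 3 - 2 = 1.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank

# A basis vector of N(A): the last right singular vector (zero singular value).
_, sing_vals, Vt = np.linalg.svd(A)
x = Vt[-1]                      # spans N(A) since nullity == 1
residual = np.linalg.norm(A @ x)

print(rank, nullity)            # 2 1
```

Since `residual` is numerically zero, `x` indeed solves Ax = 0, confirming that the nullity counts the basis vectors of N(A).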