Lecture 5: Inference in OLS Regression
Ordinary Least Squares: Matrix Form


Dave Armstrong
University of Wisconsin – Milwaukee
Department of Political Science
e: [email protected]
w: www.quantoid.net/ICPSR.php

Outline

• Explained Variance in Multiple Regression
• Ordinary Least Squares: Matrix Form
• Properties of the OLS Estimator
• Statistical Inference for OLS
• Example: Duncan Data
• Model (Mis)Specification

Partial Relationships

Partial relationships are those that "control" for the effects of other variables. How does this "controlling" happen?

• Controlling happens by removing the parts of X1 and Y that are explained by X2, ..., Xk when calculating the slope coefficient.
• Note that if X1 can be perfectly predicted by X2, ..., Xk, then there will be nothing left to be related to Y.
• Because of the way that controlling happens, it is impossible to say how much variance each variable uniquely explains, because there are overlaps in the variance of the X variables.

Table: OLS Regressions of Occupational Prestige

                                       Model 1    Model 2    Model 3    Model 4
Intercept                              27.141*   -10.732*    48.693*    -6.794*
                                       (2.268)    (3.677)    (2.308)    (3.239)
Mean income of incumbents (in $1000)    2.897*                           1.314*
                                       (0.283)                          (0.278)
Mean incumbent years of education                  5.361*                4.187*
                                                  (0.332)               (0.389)
Percentage of women incumbents                               -0.064     -0.009
                                                             (0.054)    (0.030)
N                                          102        102        102        102
R2                                       0.511      0.723      0.014      0.798
adj. R2                                  0.506      0.720      0.004      0.792
Resid. sd                               12.090      9.103     17.169      7.846

Main entries are OLS coefficients; standard errors in parentheses.
* indicates significance at p < 0.05 (two-tailed).

Partial Relationships (2)

To find the partial relationship, we can just use multiple regression, or we could do the following:

• First, partial out the effects of education and women on prestige:

  Prestige_i = A_1 + B_1 Education_i + D_1 Women_i + E^P_i

• Then, partial out the effects of education and women on income:

  Income_i = A_2 + B_2 Education_i + D_2 Women_i + E^I_i

• Finally, calculate the simple regression of the first set of residuals on the second:

  E^P_i = A + B E^I_i + E_i

Partial Relationships in R

> EP <- lm(prestige ~ education + women, data = Prestige)$residuals
> EI <- lm(income ~ education + women, data = Prestige)$residuals
> partial.mod <- lm(EP ~ EI)
> coef(mod4)
 (Intercept)       income    education        women
-6.794334203  1.313560428  4.186637275 -0.008905157
> coef(partial.mod)
 (Intercept)           EI
8.190603e-16 1.313560e+00

The slope on EI matches the coefficient on income from the full multiple regression (Model 4), and the intercept is zero up to numerical precision.

Ordinary Least Squares: Matrix Form

Vectors and Matrices

• A vector is a listing of numbers in a particular order. We can have row-vectors v = [1, 2, 3, 4] or column vectors v = [1; 2; 3; 4] (the same numbers written as a column).
• The ordering of the numbers matters, thus if v* = [4, 3, 2, 1], then v ≠ v*.

Vectors

Consider the following two vectors u = [3, 3, 3, 3] (or u = [u1, u2, u3, u4]) and v = [1, 2, 3, 4] (or v = [v1, v2, v3, v4]).

• u + v = [u1 + v1, u2 + v2, u3 + v3, u4 + v4] = [3 + 1, 3 + 2, 3 + 3, 3 + 4] = [4, 5, 6, 7]
• u - v = [u1 - v1, u2 - v2, u3 - v3, u4 - v4] = [3 - 1, 3 - 2, 3 - 3, 3 - 4] = [2, 1, 0, -1]
• Scalar multiplication and division simply result in the elements of the vector being multiplied or divided by the scalar.

Vectors in R

We already know how to make vectors in R; it's just with the c() command.

> u <- c(3, 3, 3, 3)
> v <- c(1, 2, 3, 4)
> u + v
[1] 4 5 6 7
> u - v
[1]  2  1  0 -1
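The slides mention scalar multiplication and division of vectors but do not show them in R. A minimal sketch (my addition, not from the original slides), reusing the u and v created above:

> u * 2       # multiply every element of u by the scalar 2
[1] 6 6 6 6
> v / 2       # divide every element of v by the scalar 2
[1] 0.5 1.0 1.5 2.0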
Conformability

Vectors can only be added, subtracted, etc., if they are "conformable".

• The vectors have to be of the same size.
• In the examples above (addition and subtraction), this amounts to the vectors having the same length. If vectors do not have the same length, they cannot be added together.
• This will amount to something different when we talk about multiplication of matrices.

Inner Product

The inner product (or dot product) of vectors is an important type of calculation.

• The inner product of two vectors results in a scalar (i.e., a single number).
• For example, u · v = [u1v1 + u2v2 + u3v3 + u4v4]; with the data above:

  u · v = [3 × 1 + 3 × 2 + 3 × 3 + 3 × 4] = [3 + 6 + 9 + 12] = 30

> u %*% v
     [,1]
[1,]   30

Outer Product I

Where the inner-product of vectors generates a scalar (a single value), the outer-product generates a matrix.

• A row vector has dimensions 1 × k and a column-vector has dimensions k × 1.
• In matrix multiplication, as we'll see shortly, matrices are conformable if they have the same inner dimension. So, we could multiply something 1 × k by something that was k × n, where n is some positive integer. We do the same with vectors.
• Both u and v are row-vectors above. To make one of them a column-vector, we will have to transpose it. Transposing interchanges the rows and columns of a matrix/vector. We can indicate transposition by v'.

Outer Product II

• When u is 1 × k and v' is k × 1, the result is 1 × 1, because the result is the sum of the products of the pairwise elements of each row (of which there is 1) and each column (of which there is 1).

The outer product switches these two: u'v. Since u' is k × 1 and v is 1 × k, the result is a matrix with dimensions k × k:

         [ 3 ]                 [ 3  6  9 12 ]
  u'v =  [ 3 ] [1, 2, 3, 4]  = [ 3  6  9 12 ]     (1)
         [ 3 ]                 [ 3  6  9 12 ]
         [ 3 ]                 [ 3  6  9 12 ]

• To be perfectly correct above, we should have multiplied u · v'.

Outer Product in R

> u %o% v
     [,1] [,2] [,3] [,4]
[1,]    3    6    9   12
[2,]    3    6    9   12
[3,]    3    6    9   12
[4,]    3    6    9   12
> outer(u, v, "*")
     [,1] [,2] [,3] [,4]
[1,]    3    6    9   12
[2,]    3    6    9   12
[3,]    3    6    9   12
[4,]    3    6    9   12

Matrices

A matrix is a rectangular arrangement of numbers. Matrices have two dimensions, the number of rows and the number of columns, and the ordering in both the rows and columns of the matrix matters. For example:

  X = [ 1  2 ]     (2)
      [ 3  4 ]

is a 2 × 2 matrix. We could say X[2, 1] = x21 = 3; that is, the element in the second row, first column of X is 3.

Special matrices

• A square matrix is one with the same number of rows and columns. The rows and columns needn't contain the same values, but the number of rows and the number of columns do need to be the same.
• A symmetric matrix is one where xij = xji.
• A diagonal matrix is a square matrix where xij is non-zero when i = j and is zero when i ≠ j.
• An identity matrix is a diagonal matrix where all diagonal elements equal 1.

Matrix Addition/Subtraction

We can also do math on matrices. For addition and subtraction, the matrices must be of the same order (that is, they must have the same dimensions).

  X = [ x11  x12 ]    Y = [ y11  y12 ]    X + Y = [ x11 + y11   x12 + y12 ]
      [ x21  x22 ]        [ y21  y22 ]            [ x21 + y21   x22 + y22 ]

To test, add X = [1 2; 3 4] and Y = [5 6; 7 8]. Answer: [6 8; 10 12].

Matrices in R

When you enter data into a matrix in R, you have to specify how many rows and columns it has. By default, R fills in the matrix by columns.

> x <- matrix(c(1, 3, 2, 4), ncol = 2, nrow = 2)
> y <- matrix(c(5, 7, 6, 8), ncol = 2, nrow = 2)
> x + y
     [,1] [,2]
[1,]    6    8
[2,]   10   12
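The special matrices defined above (diagonal, identity, symmetric) are easy to construct and check in base R. This is an illustrative sketch of my own, not from the original slides; it uses only the base functions diag(), matrix(), t(), and all(), and also shows the byrow argument that overrides R's default column-wise filling.

> diag(c(1, 2, 3))                       # a 3 x 3 diagonal matrix
     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    2    0
[3,]    0    0    3
> diag(3)                                # the 3 x 3 identity matrix
     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    1    0
[3,]    0    0    1
> S <- matrix(c(1, 2, 2, 5), ncol = 2)   # symmetric: S[i,j] = S[j,i]
> all(S == t(S))
[1] TRUE
> matrix(1:4, nrow = 2, byrow = TRUE)    # fill by rows rather than columns
     [,1] [,2]
[1,]    1    2
[2,]    3    4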
Matrix Multiplication I

Multiplication is where things get a bit tricky, but you might be able to see where we're going.

  X = [ x11  x12 ]    Y = [ y11  y12 ]
      [ x21  x22 ]        [ y21  y22 ]

To figure out what XY is, let's think about breaking down X into row-vectors and Y into column-vectors.

• The first row-vector of X is x1· = [x11  x12]
• The second row-vector of X is x2· = [x21  x22]
• The first column-vector of Y is y·1 = [y11; y21]
• The second column-vector of Y is y·2 = [y12; y22]

Matrix Multiplication II

We can find the product of X and Y by taking inner products of the appropriate row- and column-vectors.

  XY = [ x1· · y·1   x1· · y·2 ]
       [ x2· · y·1   x2· · y·2 ]

> x %*% y
     [,1] [,2]
[1,]   19   22
[2,]   43   50

The ij-th element of XY is the inner product of the i-th row of X and the j-th column of Y.

Properties of Matrix Multiplication

• When we see XY, then we can say that Y is pre-multiplied by X or that X is post-multiplied by Y.
• The order of the multiplication matters here, so XY ≠ YX.

Much like vectors, matrices can be transposed as well. Here, we interchange the rows and columns, so that x'ij = xji. Sometimes, matrices need to be transposed to make them conformable for multiplication or addition. The properties of the transpose are as follows:

• (X')' = X
• (X + Y)' = Y' + X'
• (XY)' = Y'X'
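As a quick numerical check of these properties (my own sketch, reusing the x and y matrices created earlier; not part of the original slides):

> y %*% x                              # YX differs from XY computed above
     [,1] [,2]
[1,]   23   34
[2,]   31   46
> all(t(x %*% y) == t(y) %*% t(x))     # (XY)' = Y'X'
[1] TRUE
> all(t(x + y) == t(x) + t(y))         # (X + Y)' = X' + Y'
[1] TRUE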