Mathematics for Economists


Chapters 4-5: Linear Models and Matrix Algebra

(Title-slide figures: The Nine Chapters on the Mathematical Art, c. 1000-200 BC, and Johann Carl Friedrich Gauss, 1777-1855.)

Math for Economists & Finance

Modern economics and finance are analytical and mathematical in structure. Modern economic theory is presented in terms of mathematical economic models, with a set of mathematical relationships used to clarify assumptions and implications.

Within the social sciences, economics and finance are fields where math is a natural tool. Economics and finance deal with many metric concepts: prices, quantities, income, interest rates, etc. Math allows economists to form systematic relations between metric concepts and to study the implications of these relations. Math allows us to develop and test propositions.

Remember: Math econ is like love – a simple idea but it can get complicated.

4. Linear Algebra

Some early history: The beginnings of matrices and determinants go back to the second century BC, although traces can be seen as far back as the fourth century BC. But the ideas did not make it into mainstream math until the late 16th century.

The Babylonians, around 300 BC, studied problems which lead to simultaneous linear equations. The Chinese, between 200 BC and 100 BC, came much closer to matrices than the Babylonians. Indeed, the text Nine Chapters on the Mathematical Art, written during the Han Dynasty, gives the first known example of matrix methods. In Europe, 2x2 determinants were considered by Cardano at the end of the 16th century, and larger ones by Leibniz and, in Japan, by Seki about 100 years later.

4. What is a Matrix?

A matrix is a set of elements, organized into rows and columns:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

• a and d are the diagonal elements.
• b and c are the off-diagonal elements.
• Matrices are like plain numbers in many ways: they can be added, subtracted, and, in some cases, multiplied and inverted (divided).

(Arthur Cayley, 1821-1895, England)

4. Matrix: Details

Examples:

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}; \qquad b = \begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix}$$

• Dimensions of a matrix: number of rows by number of columns. The matrix A is a 2x2 matrix; b is a 1x3 matrix.
• A matrix with only 1 column or only 1 row is called a vector.
• If a matrix has an equal number of rows and columns, it is called a square matrix. Matrix A, above, is a square matrix.
• Usual notation: upper-case letters for matrices, lower-case letters for vectors.

4. Matrix: Geometric Intuition

In an n-dimensional space with a fixed coordinate system (origin), an n-dimensional vector $\vec{v}$ is an arrow from the origin to the point in space whose coordinates correspond to the elements of $\vec{v}$. As such, vectors have direction and magnitude (a scalar). The arrow notation, $\vec{v}$, incorporates the first feature.

If no origin is fixed, a vector represents a displacement, that is, the position of one point relative to another. The vector $\vec{v} = \overrightarrow{AB}$ represents the displacement from point A to point B. If the same displacement $\vec{v}$ is applied from point C, we end up in D.

Alternatively, a vector is a way of measuring an amount of an oriented 1-dimensional subspace of a vector space. That is, it picks out an oriented line through an origin and specifies a length.
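As a minimal sketch in R (the coordinates below are made up purely for illustration): a vector's magnitude is the square root of the sum of its squared elements, and dividing by the magnitude strips the arrow down to its direction.

```r
# A 3-dimensional vector: an arrow from the origin to the point (3, 4, 12)
v <- c(3, 4, 12)

magnitude <- sqrt(sum(v^2))  # Euclidean length, a scalar: 13
direction <- v / magnitude   # unit vector: same direction, length 1

magnitude
sqrt(sum(direction^2))       # check: the unit vector has length 1
```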
4. Matrix: Geometric Intuition

A "bivector" is a way of measuring an amount of an oriented 2-dimensional subspace of a vector space. That is, it picks out an oriented plane through the origin and specifies an area. Two (non-parallel) vectors give a bivector (a matrix of rank 2!). Similarly, there are "trivectors," and so on.

But, in economics and finance, the geometric interpretation is not very intuitive. We use vectors and matrices as a tool to organize and manipulate information.

4. Matrix: Information

Information is described by data. A tool to organize the data is a list, which we call a vector. Lists of lists are called matrices. That is, we organize the data using matrices.

We think of the elements of X as data points ("data entries," "observations"); in economics, we usually have numerical data. We store the data in rows. In a Txk matrix X, over time we build a database:

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & \ddots & \vdots \\ x_{T1} & \cdots & x_{Tk} \end{pmatrix}$$

where row 1 holds the k entries at time 1, and row T holds the k entries at time T. Once the data is organized in matrices, it can be easily manipulated: multiplied, added, etc. (this is what Excel does).

4. Matrix: Details

In econometrics, we have a model y = f(x1, x2, ..., xk), which we want to estimate. We collect data, say T (or N) observations, on a dependent variable, y, and on k explanatory variables, X. Under the usual notation, vectors will be column vectors; y and the xj are Tx1 vectors:

$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_T \end{pmatrix} \qquad x_j = \begin{pmatrix} x_{1j} \\ \vdots \\ x_{Tj} \end{pmatrix}, \quad j = 1, \dots, k$$

X is a Txk matrix:

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & \ddots & \vdots \\ x_{T1} & \cdots & x_{Tk} \end{pmatrix}$$

Its columns are the k Tx1 vectors xj. It is common to treat x1 as a vector of ones, ί.

4.1 Special Matrices: Identity and Null

• Identity matrix: a square matrix with 1's along the diagonal and 0's everywhere else. Similar to the scalar "1."

$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

• Null matrix: a matrix in which all elements are 0's. Similar to the scalar "0."

$$0_{3 \times 3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

• Both are diagonal matrices: all off-diagonal elements are zero. Both are examples of symmetric and idempotent matrices. As we will see later:
– Symmetric: $A = A^T$
– Idempotent: $A = A^2 = A^3 = \dots$

4.1 Matrix: Elementary Row Operations

• Elementary row operations:
– Switching: swap the positions of two rows.
– Multiplication: multiply a row by a non-zero scalar.
– Addition: add to one row a scalar multiple of another.

• An elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation.

• If the matrix subject to elementary row operations is associated with a system of linear equations, then these operations do not change the solution set. Row operations can make the problem easier.

• Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form.

4.1 Matrix multiplication: Details

Multiplication of matrices requires a conformability condition. The conformability condition for multiplication is that the column dimension of the lead matrix A must equal the row dimension of the lag matrix B. If A is an (mxn) matrix and B an (nxp) matrix (A has the same number of columns as B has rows), then the product AB is defined: AB is the (mxp) matrix whose ik-th element is

$$(AB)_{ik} = \sum_{j=1}^{n} a_{ij}\, b_{jk}$$
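A quick check in R (the matrices are arbitrary, for illustration only): `%*%` performs matrix multiplication, and R enforces the conformability condition by refusing a product whose inner dimensions disagree.

```r
A <- matrix(1:6, nrow = 2)  # 2x3 (m = 2, n = 3)
B <- matrix(1:6, nrow = 3)  # 3x2 (n = 3, p = 2)

dim(A %*% B)  # 2 2: columns of A match rows of B, so AB is (mxp) = 2x2
dim(B %*% A)  # 3 3: conformable in the other order too, but a different product

# A %*% t(B) fails: t(B) is 2x3, and A's 3 columns do not match its 2 rows
# Error in A %*% t(B) : non-conformable arguments
```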
4.1 Matrix multiplication: Example

What are the dimensions of the vector, matrix, and result?

$$aB = \begin{pmatrix} a_{11} & a_{12} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} & a_{11}b_{13} + a_{12}b_{23} \end{pmatrix}$$

• Dimensions: a (1x2), B (2x3) ⇒ c (1x3).

4.1 Transpose Matrix

The transpose of a matrix A is another matrix $A^T$ (also written A′) created by any one of the following equivalent actions:
– write the rows (columns) of A as the columns (rows) of $A^T$;
– reflect A by its main diagonal to obtain $A^T$.

Formally, the (i,j) element of $A^T$ is the (j,i) element of A: $[A^T]_{ij} = [A]_{ji}$. If A is an m × n matrix, $A^T$ is an n × m matrix, so conformability changes unless the matrix is square. Also, (A′)′ = A.

Example:

$$A = \begin{pmatrix} 3 & 8 & 9 \\ 1 & 0 & 4 \end{pmatrix} \qquad A^T = \begin{pmatrix} 3 & 1 \\ 8 & 0 \\ 9 & 4 \end{pmatrix}$$

4.1 Transpose Matrix: Example – X′

In econometrics, an important matrix is X′X. Recall X, a (Txk) matrix:

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & \ddots & \vdots \\ x_{T1} & \cdots & x_{Tk} \end{pmatrix}$$

Then X′ is a (kxT) matrix:

$$X' = \begin{pmatrix} x_{11} & \cdots & x_{T1} \\ \vdots & \ddots & \vdots \\ x_{1k} & \cdots & x_{Tk} \end{pmatrix}$$

4.1 Basic Operations

Addition, subtraction, multiplication:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix} \quad \text{(just add elements)}$$

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} - \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} a-e & b-f \\ c-g & d-h \end{pmatrix} \quad \text{(just subtract elements)}$$

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix} \quad \text{(multiply each row by each column and add)}$$

$$k \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} ka & kb \\ kc & kd \end{pmatrix} \quad \text{(multiply each element by the scalar)}$$

4.1 Basic Matrix Operations: Examples

Matrix addition ($A_{2\times2} + B_{2\times2} = C_{2\times2}$):

$$\begin{pmatrix} 2 & 1 \\ 7 & 9 \end{pmatrix} + \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 5 & 2 \\ 7 & 11 \end{pmatrix}$$

Matrix subtraction:

$$\begin{pmatrix} 2 & 1 \\ 7 & 9 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 5 & 6 \end{pmatrix}$$

Matrix multiplication ($A_{2\times2} \times B_{2\times2} = C_{2\times2}$):

$$\begin{pmatrix} 2 & 1 \\ 7 & 9 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix} = \begin{pmatrix} 4 & 3 \\ 25 & 27 \end{pmatrix}$$

Scalar multiplication:

$$\tfrac{1}{2} \begin{pmatrix} 4 & 14 \\ 8 & 6 \end{pmatrix} = \begin{pmatrix} 2 & 7 \\ 4 & 3 \end{pmatrix}$$

4.1 Basic Matrix Operations: X′X

A special matrix in econometrics is X′X (a kxk matrix). Recall X (Txk) and X′ (kxT):

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & \ddots & \vdots \\ x_{T1} & \cdots & x_{Tk} \end{pmatrix} \qquad X' = \begin{pmatrix} x_{11} & \cdots & x_{T1} \\ \vdots & \ddots & \vdots \\ x_{1k} & \cdots & x_{Tk} \end{pmatrix}$$

Then,

$$X'X = \begin{pmatrix} \sum_t x_{t1}x_{t1} & \cdots & \sum_t x_{t1}x_{tk} \\ \vdots & \ddots & \vdots \\ \sum_t x_{tk}x_{t1} & \cdots & \sum_t x_{tk}x_{tk} \end{pmatrix} = \sum_{t=1}^{T} x_t\, x_t'$$

where $x_t$ is the t-th observation (the t-th row of X, written as a kx1 column vector).

4.1 Basic Matrix Operations: ί′X

Recall ί is a column vector of ones (in this case, a Tx1 vector):

$$ί = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}$$

Given X (Txk), ί′X is a 1xk vector of column sums:

$$ί'X = \begin{pmatrix} 1 & \cdots & 1 \end{pmatrix} \begin{pmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & \ddots & \vdots \\ x_{T1} & \cdots & x_{Tk} \end{pmatrix} = \begin{pmatrix} \sum_t x_{t1} & \cdots & \sum_t x_{tk} \end{pmatrix}$$

Note: If x1 is a vector of ones (representing a constant in the classical linear model), then ί′x1 = $\sum_t x_{t1} = \sum_t 1 = T$ (a "dot product").

4.1 Basic Matrix Operations: R

Many ways to create a vector (c, 2:7, seq, rep, etc.) or a matrix (c, cbind, rbind).
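A short sketch of those creation functions, plus the X′X and ί′X computations above, in R (the data matrix is random, purely for illustration):

```r
# Vectors: several equivalent creation tools
v1 <- c(2, 4, 6)
v2 <- 2:7
v3 <- seq(from = 0, to = 1, by = 0.25)
v4 <- rep(1, times = 5)

# A small Txk data matrix with a constant in the first column
Tobs <- 5
iota <- rep(1, Tobs)                          # ί: the Tx1 vector of ones
X    <- cbind(iota, rnorm(Tobs), rnorm(Tobs)) # Txk, here k = 3

t(X) %*% X        # X'X: a kxk matrix of cross-product sums
crossprod(X)      # same matrix, computed in one step
t(iota) %*% X     # ί'X: the 1xk vector of column sums
colSums(X)        # the same sums, returned as a plain vector
t(iota) %*% iota  # ί'ί = T
```

crossprod(X) is the idiomatic way to form X′X in R, since it avoids explicitly transposing X.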