Compositions of Linear Transformations


Compositions of Linear Transformations (c) 2015 UM Math Dept, licensed under a Creative Commons By-NC-SA 4.0 International License.

Math 217: §2.3 Composition of Linear Transformations
Professor Karen Smith^1

^1 Thanks to Anthony Zheng, Section 5 Winter 2016, for finding numerous errors in the solutions.

Inquiry: Is the composition of linear transformations a linear transformation? If so, what is its matrix?

A. Let $\mathbb{R}^2 \xrightarrow{T} \mathbb{R}^3$ and $\mathbb{R}^3 \xrightarrow{S} \mathbb{R}^2$ be two linear transformations.

1. Prove that the composition $S \circ T$ is a linear transformation (using the definition!). What is its source vector space? What is its target vector space?

Solution note: The source of $S \circ T$ is $\mathbb{R}^2$ and the target is also $\mathbb{R}^2$. The proof that $S \circ T$ is linear: we need to check that $S \circ T$ respects addition and also scalar multiplication. First, note that for any $\vec{x}, \vec{y} \in \mathbb{R}^2$ we have
$$S \circ T(\vec{x} + \vec{y}) = S(T(\vec{x} + \vec{y})) = S(T(\vec{x}) + T(\vec{y})) = S(T(\vec{x})) + S(T(\vec{y})) = S \circ T(\vec{x}) + S \circ T(\vec{y}).$$
Here the second and third equal signs come from the linearity of $T$ and $S$, respectively. Next, note that for any $\vec{x} \in \mathbb{R}^2$ and any scalar $k$ we have
$$S \circ T(k\vec{x}) = S(T(k\vec{x})) = S(kT(\vec{x})) = kS(T(\vec{x})) = k\,(S \circ T)(\vec{x}),$$
so $S \circ T$ also respects scalar multiplication. The second and third equal signs are again justified by the linearity of $T$ and $S$, respectively. So $S \circ T$ respects both addition and scalar multiplication, so it is linear.

2. Suppose that the matrix of $T$ is $A = \begin{pmatrix} 1 & 2 \\ 0 & -1 \\ -1 & 3 \end{pmatrix}$ and the matrix of $S$ is $B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}$. Compute explicitly a formula for $S(T(\begin{pmatrix} x \\ y \end{pmatrix}))$.

Solution note: $S \circ T \begin{pmatrix} x \\ y \end{pmatrix} = S \begin{pmatrix} x + 2y \\ -y \\ -x + 3y \end{pmatrix} = \begin{pmatrix} (x + 2y) + (-x + 3y) \\ y \end{pmatrix} = \begin{pmatrix} 5y \\ y \end{pmatrix}$.

3. What is $S \circ T(\vec{e}_1)$? $S \circ T(\vec{e}_2)$?

Solution note: $S \circ T(\vec{e}_1) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ and $S \circ T(\vec{e}_2) = \begin{pmatrix} 5 \\ 1 \end{pmatrix}$.

4. Find the matrix of the composition $S \circ T$. Compare to (2).

Solution note: $\begin{pmatrix} 0 & 5 \\ 0 & 1 \end{pmatrix}$. Note that the columns are the vectors we computed in (3).

5. Compute the matrix product $BA$. Compare to (4). What do you notice?

Solution note: $BA = \begin{pmatrix} 0 & 5 \\ 0 & 1 \end{pmatrix}$. The same!

6. What about $T \circ S$? What is its matrix in terms of $A$ and $B$?

Solution note: This is $AB$.

7. What is the general principle here? Say we have a composition of linear transformations $\mathbb{R}^n \xrightarrow{T_A} \mathbb{R}^m \xrightarrow{T_B} \mathbb{R}^p$ given by matrix multiplication by matrices $A$ and $B$ respectively. State and prove a precise theorem about the matrix of the composition. Be very careful about the order of multiplication!

Solution note: Theorem: If $\mathbb{R}^n \xrightarrow{T_A} \mathbb{R}^m \xrightarrow{T_B} \mathbb{R}^p$ are linear transformations given by matrix multiplication (on the left) by matrices $A$ and $B$ respectively, then the composition $T_B \circ T_A$ has matrix $BA$. Proof: For any $\vec{x} \in \mathbb{R}^n$, we have
$$T_B \circ T_A(\vec{x}) = T_B(T_A(\vec{x})) = T_B(A\vec{x}) = B(A\vec{x}) = (BA)\vec{x}.$$
Here every equality uses a definition or basic property of matrix multiplication: the first is the definition of composition, the second is the definition of $T_A$, the third is the definition of $T_B$, and the fourth is the associativity of matrix multiplication.
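As a quick numerical check of the theorem in A.7 (an aside added here, not part of the original worksheet), the sketch below redoes parts (3)–(5) with NumPy: it computes $S \circ T$ on the standard basis vectors and confirms that the resulting matrix is exactly the product $BA$.

```python
import numpy as np

# Matrices from part A.2: T has matrix A (3x2), S has matrix B (2x3).
A = np.array([[1, 2], [0, -1], [-1, 3]])
B = np.array([[1, 0, 1], [0, -1, 0]])

def T(x):
    return A @ x  # T(x) = Ax

def S(x):
    return B @ x  # S(x) = Bx

# The columns of the matrix of the composition are S(T(e1)) and S(T(e2)).
e1, e2 = np.eye(2)
matrix_of_composition = np.column_stack([S(T(e1)), S(T(e2))])

print(matrix_of_composition)   # [[0. 5.], [0. 1.]]
print(B @ A)                   # the same matrix: the composition has matrix BA
assert np.array_equal(matrix_of_composition, B @ A)
```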
B. Let $A = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}$ and $C = \begin{pmatrix} 1 & c \\ d & 1 \end{pmatrix}$.

1. Compute $AC$. Compute $CA$. What do you notice?

2. Does matrix multiplication satisfy the commutative law?

3. TRUE or FALSE: If we have two linear transformations, $S$ and $T$, both from $\mathbb{R}^n$ to $\mathbb{R}^n$, then $S \circ T = T \circ S$.

Solution note: $AC = \begin{pmatrix} ad + 1 & a + c \\ d & 1 \end{pmatrix}$ and $CA = \begin{pmatrix} 1 & a + c \\ d & ad + 1 \end{pmatrix}$. These are not equal in general, so matrix multiplication does not satisfy the commutative law! In particular, composition of linear transformations does not satisfy the commutative law either, so (3) is FALSE. An explicit counterexample: let $S$ be left multiplication by $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $T$ be multiplication by $\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$. Then $T \circ S$ is multiplication by $\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$, but $S \circ T$ is multiplication by $\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$. These are not the same maps since, for example, they take different values on $\vec{e}_1$.

C. The identity transformation is the map $\mathbb{R}^n \xrightarrow{T} \mathbb{R}^n$ doing nothing: it sends every vector $\vec{x}$ to $\vec{x}$. A linear transformation $T$ is invertible if there exists a linear transformation $S$ such that $T \circ S$ is the identity map (on the source of $S$) and $S \circ T$ is the identity map (on the source of $T$).

1. What is the matrix of the identity transformation? Prove it!

2. If $\mathbb{R}^n \xrightarrow{T} \mathbb{R}^m$ is invertible, what can we say about its matrix?

Solution note: The matrix of the identity transformation is $I_n$. To prove it, note that the identity transformation takes $\vec{e}_i$ to $\vec{e}_i$, and these are the columns of the identity matrix. So the identity matrix is the unique matrix of the identity map. If $T$ is invertible, then the matrix of $T$ is invertible.

Math 217: §2.3 Block Multiplication
Professor Karen Smith

D. In the book's Theorem 2.3.9, we see that we can think about matrices in "blocks" (for example, a 4 × 4 matrix may be thought of as being composed of four 2 × 2 blocks), and then we can multiply as though the blocks were scalars using Theorem 2.3.4. This is a surprisingly useful result!

1. Consider the matrix $C = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$. If we write this as a block matrix $C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}$, where all the blocks are the same size, what are the blocks $C_{ij}$?

Solution note: One way is: $C_{11} = C_{12} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, $C_{21} = 0_{2 \times 2}$ (the 2 × 2 zero matrix), and $C_{22} = I_2$ (the 2 × 2 identity matrix).

2. Suppose we want to calculate the product $CD$, where $D$ is the block matrix $D = \begin{pmatrix} D_1 \\ D_2 \end{pmatrix}$, with $D_1$ and $D_2$ each being a 2 × 2 block. Write the product in terms of $C_{ij}$ and $D_k$ by multiplying the blocks as if they were scalars, as suggested by Theorem 2.3.9.

Solution note: We have
$$CD = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} \begin{pmatrix} D_1 \\ D_2 \end{pmatrix} = \begin{pmatrix} C_{11}D_1 + C_{12}D_2 \\ C_{21}D_1 + C_{22}D_2 \end{pmatrix}.$$

3. Compute the product $AB$ where
$$A = \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} a & p & x & 0 & 0 & 0 & 1 & 0 & 0 \\ b & q & y & 0 & 0 & 0 & 0 & 1 & 0 \\ c & r & z & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 2 & 3 & 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 2 & 3 & 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 2 & 3 & 0 & 0 & 1 & 0 & 0 & 1 \\ -a & -p & -x & 0 & 0 & 0 & 1 & 0 & 0 \\ -b & -q & -y & 0 & 0 & 0 & 0 & 1 & 0 \\ -c & -r & -z & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
[Hint: Be clever; take advantage of lurking identity matrices and block multiplication.]

Solution note: Break this up, sudoku-like, into nine 3 × 3 blocks. Every block of $A$ is $I_3$, and most blocks of $B$ are identity or zero matrices, so the multiplication is especially easy: each block row of $AB$ is the sum of the three block rows of $B$, namely $\begin{pmatrix} N & I_3 & 3I_3 \end{pmatrix}$, where $N$ is the 3 × 3 matrix whose rows are each $(1\ 2\ 3)$.

4. With $C$ as in (1), find another way to break $C$ up as $C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}$, but where the blocks are not all the same size.

Solution note: You could take $C_{11}$ to be 1 × 1 and $C_{22}$ to be 3 × 3. So $C_{11} = (1)$, $C_{12} = (1\ 1\ 1)$, $C_{21} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$, and $C_{22} = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. It's easiest to see what I mean by drawing in two lines cutting the matrix up into blocks.
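The block arithmetic in D.2 is easy to verify numerically. Below is a short sketch (an added illustration, not from the worksheet) that assembles the matrix $C$ of D.1 from its blocks with NumPy's `np.block`, picks a random 4 × 2 matrix $D$, and checks that the block formula agrees with the ordinary product.

```python
import numpy as np

# The blocks of C from part D.1.
C11 = np.array([[1, 1], [0, 1]])
C12 = C11.copy()                        # C12 equals C11
C21 = np.zeros((2, 2), dtype=int)       # the 2x2 zero matrix
C22 = np.eye(2, dtype=int)              # the 2x2 identity matrix

C = np.block([[C11, C12], [C21, C22]])  # reassembles the 4x4 matrix from D.1

rng = np.random.default_rng(0)
D1 = rng.integers(-5, 5, size=(2, 2))   # the two 2x2 blocks of D
D2 = rng.integers(-5, 5, size=(2, 2))
D = np.vstack([D1, D2])

# Block formula from D.2: CD = [C11 D1 + C12 D2 ; C21 D1 + C22 D2].
blockwise = np.vstack([C11 @ D1 + C12 @ D2, C21 @ D1 + C22 @ D2])
assert np.array_equal(C @ D, blockwise)
```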
F. Consider a matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ where $A$ is 3 × 5 and $D$ is 7 × 1 (so $M$ is given in block form).

1. What are the sizes of $B$ and $C$? What is the size of $M$?

Solution note: $M$ is 10 × 6, $B$ is 3 × 1, and $C$ is 7 × 5.

2. Suppose $N = \begin{pmatrix} A' & B' \\ C' & D' \end{pmatrix}$. What are the possible dimensions of $N$ so that the product $MN$ is defined? In this case, what are the dimensions of the smaller blocks $A', B', C'$, and $D'$ so that the product can be computed using a block product?

Solution note: The only restrictions are as follows: $A'$ and $B'$ must have 5 rows, and $C'$ and $D'$ must have one row. Also, $A'$ and $C'$ must have the same number of columns, as must $B'$ and $D'$. So $A'$ is $5 \times n$, $B'$ is $5 \times m$, $C'$ is $1 \times n$, and $D'$ is $1 \times m$.

3. What are the possible dimensions of $N$ so that the product $NM$ is defined? In this case, what are the dimensions of the smaller blocks $A', B', C'$, and $D'$ that would allow us to compute this as a block product?

Solution note: For this, the number of columns of $N$ must equal the number of rows of $M$, so $N$ must be $p \times 10$. For a block product, $A'$ and $C'$ must have 3 columns (to match the 3 rows of $A$ and $B$) and $B'$ and $D'$ must have 7 columns (to match the 7 rows of $C$ and $D$); so $A'$ is $a \times 3$, $B'$ is $a \times 7$, $C'$ is $b \times 3$, and $D'$ is $b \times 7$, where $a + b = p$.
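To make the dimension bookkeeping in F concrete, here is a final sketch (an added illustration, not from the worksheet) that builds $M$ from random blocks of the stated sizes, builds a compatible $N$ as in F.2 with the arbitrary choices $n = 2$ and $m = 4$, and confirms that the block product agrees with the ordinary product $MN$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Blocks of M as in part F: A is 3x5, B is 3x1, C is 7x5, D is 7x1, so M is 10x6.
A = rng.standard_normal((3, 5))
B = rng.standard_normal((3, 1))
C = rng.standard_normal((7, 5))
D = rng.standard_normal((7, 1))
M = np.block([[A, B], [C, D]])

# Per F.2: A' and B' need 5 rows, C' and D' need 1 row; n and m are free choices.
n, m = 2, 4
Ap = rng.standard_normal((5, n))
Bp = rng.standard_normal((5, m))
Cp = rng.standard_normal((1, n))
Dp = rng.standard_normal((1, m))
N = np.block([[Ap, Bp], [Cp, Dp]])     # N is 6 x (n + m)

# Multiply the blocks as if they were scalars, then compare with M @ N.
blockwise = np.block([[A @ Ap + B @ Cp, A @ Bp + B @ Dp],
                      [C @ Ap + D @ Cp, C @ Bp + D @ Dp]])
assert np.allclose(M @ N, blockwise)
```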