Suggestions for midterm review #3

The repetitoria are usually not complete; I am merely bringing up the points that many people didn't know on the recitations.

0.1. Linear transformations

The following mostly comes from my recitation #11 (with minor changes) and so is unsuited for the exam. I would begin with repeating stuff, as people seem to be highly confused:

0.1.1. Repetitorium

1. A map $T : V \to W$ between two vector spaces (say, $\mathbb{R}$-vector spaces) is linear if and only if it satisfies the axioms
$$T(0) = 0; \qquad T(u + v) = T(u) + T(v) \text{ for all } u, v \in V; \qquad T(au) = aT(u) \text{ for all } u \in V \text{ and } a \in \mathbb{R}$$
(where the $\mathbb{R}$ should be a $\mathbb{C}$ if the vector spaces are complex). You can leave out the first axiom (it follows from applying the second axiom to $u = 0$ and $v = 0$), but the other two axioms are essential.

Examples:

• The map $T : \mathbb{R}^2 \to \mathbb{R}^3$ sending every $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ to $\begin{pmatrix} x_1 \\ x_1 - 2x_2 \\ 3x_2 \end{pmatrix}$ is linear. (Indeed, for every $u \in \mathbb{R}^2$ and $v \in \mathbb{R}^2$, we can write $u = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$ and $v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$, and then we have
$$T(u + v) = T \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ (u_1 + v_1) - 2(u_2 + v_2) \\ 3(u_2 + v_2) \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_1 + v_1 - 2u_2 - 2v_2 \\ 3u_2 + 3v_2 \end{pmatrix},$$
which is the same as
$$T(u) + T(v) = T \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + T \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_1 - 2u_2 \\ 3u_2 \end{pmatrix} + \begin{pmatrix} v_1 \\ v_1 - 2v_2 \\ 3v_2 \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_1 + v_1 - 2u_2 - 2v_2 \\ 3u_2 + 3v_2 \end{pmatrix}.$$
This proves the second axiom. The other axioms are just as easy to check.)

• The map $T : \mathbb{R}^2 \to \mathbb{R}$ sending every $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ to $x_1 + x_2 - 1$ is not linear. (Indeed, it fails the $T(0) = 0$ axiom. It also fails the other two axioms, but failing one of them is enough for it to be not linear.)

• The map $T : \mathbb{R} \to \mathbb{R}^2$ sending every $x$ to $\begin{pmatrix} x \\ x^2 \end{pmatrix}$ is not linear. (Indeed, it fails the second axiom for $u = 1$ and $v = 1$, because $(1 + 1)^2 \neq 1^2 + 1^2$.)

2. If $V$ and $W$ are two vector spaces, and if $T : V \to W$ is a linear map, then the matrix representation of $T$ with respect to a given basis $(v_1, v_2, \ldots, v_n)$ of $V$ and a given basis $(w_1, w_2, \ldots, w_m)$ of $W$ is the $m \times n$-matrix $M_T$ defined as follows: For every $j \in \{1, 2, \ldots, n\}$, expand the vector $T(v_j)$ with respect to the basis $(w_1, w_2, \ldots, w_m)$, say, as follows:
$$T(v_j) = a_{1,j} w_1 + a_{2,j} w_2 + \cdots + a_{m,j} w_m.$$
Then, $M_T$ is the $m \times n$-matrix whose $(i, j)$-th entry is $a_{i,j}$. For example, if $n = 2$ and $m = 3$, then
$$M_T = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \end{pmatrix},$$
where
$$T(v_1) = a_{1,1} w_1 + a_{2,1} w_2 + a_{3,1} w_3; \qquad T(v_2) = a_{1,2} w_1 + a_{2,2} w_2 + a_{3,2} w_3.$$

3. What is this matrix $M_T$ good for? First of all, it allows easily expanding $T(v)$ in the basis $(w_1, w_2, \ldots, w_m)$ of $W$ if $v$ is a vector in $V$ whose expansion in the basis $(v_1, v_2, \ldots, v_n)$ of $V$ is known. (For instance, if $v$ is one of the basis vectors $v_j$, then the expansion of $T(v_j)$ can simply be read off from the $j$-th column of $M_T$; otherwise, it is an appropriate combination of columns.) More importantly, once you know the matrix $M_T$, you can use it as a proxy whenever you want to know something about $T$. For instance, suppose you want to know the nullspace of $T$. You know how to compute the nullspace of a matrix, but not how to compute the nullspace of a linear map. So you find the nullspace of $M_T$; it consists of vectors of the form $\begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{pmatrix}$. Then, the nullspace of $T$ consists of the corresponding vectors $g_1 v_1 + g_2 v_2 + \cdots + g_n v_n$.
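To make the nullspace recipe in part 3. concrete, here is a minimal computational sketch (assuming Python with sympy; the matrix $M_T$ and the basis below are made up purely for illustration and are not from the exercises). It computes the nullspace of $M_T$ exactly and then translates each coordinate vector $(g_1, \ldots, g_n)$ back into the vector $g_1 v_1 + \cdots + g_n v_n$ of $V$:

from sympy import Matrix

# Hypothetical matrix representation M_T (chosen to have a nontrivial nullspace)
M_T = Matrix([[1, 2, 3],
              [0, 1, 1],
              [1, 3, 4]])

# Hypothetical basis (v_1, v_2, v_3) of V = R^3, stored as the columns of Bmat
Bmat = Matrix([[1, 1, 0],
               [0, 1, 1],
               [0, 0, 1]])

# Each nullspace vector of the matrix M_T is a coordinate vector (g_1, ..., g_n);
# the corresponding element of the nullspace of T is g_1 v_1 + ... + g_n v_n,
# i.e. the matrix-vector product Bmat * g.
for g in M_T.nullspace():
    print("coordinates:", g.T, " -> vector in V:", (Bmat * g).T)

Here sympy returns the single coordinate vector $(-1, -1, 1)$, so the nullspace of $T$ is spanned by $-v_1 - v_2 + v_3$.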
4. Suppose that $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$. These vector spaces already have standard bases, but nothing keeps you from representing a linear map $T : \mathbb{R}^n \to \mathbb{R}^m$ with respect to two other bases. There is a formula to represent the map in this case: Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear map, and let $A$ be the $m \times n$-matrix representation of $T$ with respect to the standard bases (that is, the matrix such that $T(v) = Av$ for every $v \in \mathbb{R}^n$). Let $(v_1, v_2, \ldots, v_n)$ be a basis of $\mathbb{R}^n$, and let $(w_1, w_2, \ldots, w_m)$ be a basis of $\mathbb{R}^m$. Let $B$ be the matrix with columns $v_1, v_2, \ldots, v_n$, and let $C$ be the matrix with columns $w_1, w_2, \ldots, w_m$. Then, the matrix representation of $T$ with respect to the bases $(v_1, v_2, \ldots, v_n)$ and $(w_1, w_2, \ldots, w_m)$ is $M_T = C^{-1} A B$.

0.1.2. Exercises

Exercise 0.1. Let $f : \mathbb{R}^2 \to \mathbb{R}^3$ be the linear map which sends every vector $v \in \mathbb{R}^2$ to $\begin{pmatrix} 1 & -1 \\ 2 & 0 \\ 3 & 1 \end{pmatrix} v$.

Consider the following basis $(v_1, v_2)$ of the vector space $\mathbb{R}^2$:
$$v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
Consider the following basis $(w_1, w_2, w_3)$ of the vector space $\mathbb{R}^3$:
$$w_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad w_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \qquad w_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Find the matrix $M_f$ representing $f$ with respect to these two bases $(v_1, v_2)$ and $(w_1, w_2, w_3)$.

[Hint: You can use the $M_T = C^{-1} A B$ formula here, with $B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, $C = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}$ and $A = \begin{pmatrix} 1 & -1 \\ 2 & 0 \\ 3 & 1 \end{pmatrix}$. But it is easier to do it directly, and the $C^{-1} A B$ formula does not help in the next exercises.]

First solution. We compute the matrix representation directly, using part 2. of the repetitorium above. We have
$$f(v_1) = f \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 2 & 0 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 4 \end{pmatrix} = 0 w_1 + 2 w_2 + 2 w_3$$
and similarly
$$f(v_2) = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = 1 w_1 + 1 w_2 + 1 w_3.$$
(Finding the expression $0 w_1 + 2 w_2 + 2 w_3$ for $\begin{pmatrix} 0 \\ 2 \\ 4 \end{pmatrix}$ can be done by solving a system of linear equations. In general, if we want to represent a vector $p \in \mathbb{R}^n$ as a linear combination $p = a_1 w_1 + a_2 w_2 + \cdots + a_n w_n$ of $n$ given linearly independent vectors $(w_1, w_2, \ldots, w_n)$, then we can find the coefficients $a_1, a_2, \ldots, a_n$ by solving the equation $W \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = p$, where $W$ denotes the $n \times n$-matrix with columns $w_1, w_2, \ldots, w_n$. But in our case, the vectors $w_1, w_2, w_3$ are so simple that we can easily represent an arbitrary vector $\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$ as their linear combination, as follows: $\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1 w_1 + (x_2 - x_1) w_2 + (x_3 - x_2) w_3$.)
Thus,
$$M_f = \begin{pmatrix} 0 & 1 \\ 2 & 1 \\ 2 & 1 \end{pmatrix}.$$

Second solution. We can apply part 4. of the repetitorium. Here, $n = 2$, $m = 3$, $T = f$, $A = \begin{pmatrix} 1 & -1 \\ 2 & 0 \\ 3 & 1 \end{pmatrix}$, $B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, and $C = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}$. Thus,
$$M_f = C^{-1} A B = \begin{pmatrix} 0 & 1 \\ 2 & 1 \\ 2 & 1 \end{pmatrix}.$$
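As a sanity check, the second solution can be reproduced mechanically. Here is a minimal sketch (assuming Python with sympy, which is not part of the original notes) that evaluates $C^{-1} A B$ exactly:

from sympy import Matrix

A = Matrix([[1, -1],
            [2,  0],
            [3,  1]])    # standard-basis matrix of f
B = Matrix([[1, 1],
            [1, 0]])     # columns v_1, v_2
C = Matrix([[1, 0, 0],
            [1, 1, 0],
            [1, 1, 1]])  # columns w_1, w_2, w_3

M_f = C.inv() * A * B
print(M_f)  # Matrix([[0, 1], [2, 1], [2, 1]]), agreeing with both solutions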
For every $n \in \mathbb{N}$, we let $P_n$ denote the vector space of all polynomials (with real coefficients) of degree $\leq n$ in one variable $x$. This vector space has dimension $n + 1$, and its simplest basis is $(1, x, x^2, \ldots, x^n)$. We call this basis the monomial basis of $P_n$.

Exercise 0.2. Which of the following maps are linear? For every one that is, represent it as a matrix with respect to the monomial bases of its domain and its target.
(a) The map $T_a : P_2 \to P_2$ given by $T_a(f) = f(x + 1)$.
(b) The map $T_b : P_2 \to P_3$ given by $T_b(f) = x f(x)$.
(c) The map $T_c : P_2 \to P_4$ given by $T_c(f) = f(1) f(x)$.
(d) The map $T_d : P_2 \to P_4$ given by $T_d(f) = f(x^2 + 1)$.
(e) The map $T_e : P_2 \to P_2$ given by $T_e(f) = x^2 f\left(\frac{1}{x}\right)$.
(g) The map $T_g : P_3 \to P_2$ given by $T_g(f) = f'(x)$.
(h) The map $T_h : P_3 \to P_2$ given by $T_h(f) = f(x) - f(x - 1)$.
(i) The map $T_i : P_3 \to P_2$ given by $T_i(f) = f(x) - 2 f(x - 1) + f(x - 2)$.
[There is no part (f), because I want to avoid having a $T_f$ map when $f$ stands for a polynomial.]

Solution. First, let us introduce a new notation to clear up some ambiguities. In the exercise, we have been writing $f(x + 1)$ for "$f$ with $x + 1$ substituted for $x$". This notation could, however, be mistaken for the product of $f$ with $x + 1$. (This confusion is particularly bad when $f$ is, say, $x$; normal people would read $x(x + 1)$ as "$x \cdot (x + 1)$" and not as "$x$ with $x + 1$ substituted for $x$".) So we change this notation to $f|_{x+1}$.
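Matrices like the ones asked for in Exercise 0.2 can also be found mechanically, by expanding the image of each monomial basis vector exactly as in part 2. of the repetitorium. Here is a minimal sketch for part (a) (assuming Python with sympy, which is not part of the original notes); it builds the matrix of $T_a$ column by column:

from sympy import symbols, Poly, Matrix

x = symbols('x')
n = 2  # domain P_2, target P_2

# Column j of the matrix = the coefficients of T_a(x^j) = (x+1)^j
# in the monomial basis (1, x, x^2) of the target.
columns = []
for j in range(n + 1):
    image = (x + 1)**j                          # T_a applied to the basis vector x^j
    coeffs = Poly(image, x).all_coeffs()[::-1]  # coefficients of 1, x, x^2, ...
    coeffs += [0] * (n + 1 - len(coeffs))       # pad with zeros up to degree n
    columns.append(coeffs)

M_Ta = Matrix(columns).T  # transpose so the expansions become columns
print(M_Ta)  # Matrix([[1, 1, 1], [0, 1, 2], [0, 0, 1]])

This matches the hand computation $T_a(1) = 1$, $T_a(x) = x + 1$, $T_a(x^2) = x^2 + 2x + 1$; swapping in the image of $x^j$ under any of the other linear maps from the exercise (and adjusting $n$ and the target dimension) handles parts (b), (g), (h), (i) the same way.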