Linear Algebra: Graduate Level Problems and Solutions

Total Pages: 16

File Type: PDF, Size: 1020 KB

Igor Yanovsky, 2005

Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I cannot be made responsible for any inaccuracies contained in this handbook.

Contents

1 Basic Theory
  1.1 Linear Maps
  1.2 Linear Maps as Matrices
  1.3 Dimension and Isomorphism
  1.4 Matrix Representations Redux
  1.5 Subspaces
  1.6 Linear Maps and Subspaces
  1.7 Dimension Formula
  1.8 Matrix Calculations
  1.9 Diagonalizability
2 Inner Product Spaces
  2.1 Inner Products
  2.2 Orthonormal Bases
    2.2.1 Gram-Schmidt Procedure
    2.2.2 QR Factorization
  2.3 Orthogonal Complements and Projections
3 Linear Maps on Inner Product Spaces
  3.1 Adjoint Maps
  3.2 Self-Adjoint Maps
  3.3 Polarization and Isometries
  3.4 Unitary and Orthogonal Operators
  3.5 Spectral Theorem
  3.6 Normal Operators
  3.7 Unitary Equivalence
  3.8 Triangulability
4 Determinants
  4.1 Characteristic Polynomial
5 Linear Operators
  5.1 Dual Spaces
  5.2 Dual Maps
6 Problems

1 Basic Theory

1.1 Linear Maps

Lemma. If $A \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ and $B \in \operatorname{Mat}_{n \times m}(\mathbb{F})$, then $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.

Proof. Note that the $(i, i)$ entry of $AB$ is $\sum_{j=1}^{n} \alpha_{ij} \beta_{ji}$, while the $(j, j)$ entry of $BA$ is $\sum_{i=1}^{m} \beta_{ji} \alpha_{ij}$. Thus
$$\operatorname{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_{ij} \beta_{ji}, \qquad \operatorname{tr}(BA) = \sum_{j=1}^{n} \sum_{i=1}^{m} \beta_{ji} \alpha_{ij},$$
and the two double sums agree since the order of summation is irrelevant.

1.2 Linear Maps as Matrices

Example. Let $P_n = \{\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n : \alpha_0, \alpha_1, \ldots, \alpha_n \in \mathbb{F}\}$ be the space of polynomials of degree $\leq n$ and $D : V \to V$ the differentiation map
$$D(\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n) = \alpha_1 + 2\alpha_2 t + \cdots + n \alpha_n t^{n-1}.$$
If we use the basis $1, t, \ldots, t^n$ for $V$, then we see that $D(t^k) = k t^{k-1}$, and thus the $(n+1) \times (n+1)$ matrix representation is computed via
$$[D(1)\ D(t)\ D(t^2)\ \cdots\ D(t^n)] = [0\ 1\ 2t\ \cdots\ n t^{n-1}] = [1\ t\ t^2\ \cdots\ t^n]
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 2 & \cdots & 0 \\
0 & 0 & 0 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \ddots & n \\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}$$

1.3 Dimension and Isomorphism

A linear map $L : V \to W$ is an isomorphism if we can find $K : W \to V$ such that $LK = I_W$ and $KL = I_V$.

Theorem. $V$ and $W$ are isomorphic $\iff$ there is a bijective linear map $L : V \to W$.

Proof. ($\Rightarrow$) If $V$ and $W$ are isomorphic, we can find linear maps $L : V \to W$ and $K : W \to V$ so that $LK = I_W$ and $KL = I_V$. Then for any $y$ we have $y = I_W(y) = L(K(y))$, so we can let $x = K(y)$, which means $L$ is onto. If $L(x_1) = L(x_2)$, then $x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2$, which means $L$ is 1-1.

($\Leftarrow$) Assume $L : V \to W$ is linear and a bijection. Then we have an inverse map $L^{-1}$ which satisfies $L \circ L^{-1} = I_W$ and $L^{-1} \circ L = I_V$. In order for this inverse map to be allowable as $K$, we need to check that it is linear. Select $\alpha_1, \alpha_2 \in \mathbb{F}$ and $y_1, y_2 \in W$. Let $x_i = L^{-1}(y_i)$, so that $L(x_i) = y_i$. Then we have
$$L^{-1}(\alpha_1 y_1 + \alpha_2 y_2) = L^{-1}(\alpha_1 L(x_1) + \alpha_2 L(x_2)) = L^{-1}(L(\alpha_1 x_1 + \alpha_2 x_2)) = I_V(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 x_1 + \alpha_2 x_2 = \alpha_1 L^{-1}(y_1) + \alpha_2 L^{-1}(y_2).$$

Theorem. If $\mathbb{F}^m$ and $\mathbb{F}^n$ are isomorphic over $\mathbb{F}$, then $n = m$.

Proof. Suppose we have $L : \mathbb{F}^m \to \mathbb{F}^n$ and $K : \mathbb{F}^n \to \mathbb{F}^m$ such that $LK = I_{\mathbb{F}^n}$ and $KL = I_{\mathbb{F}^m}$. Then $L \in \operatorname{Mat}_{n \times m}(\mathbb{F})$ and $K \in \operatorname{Mat}_{m \times n}(\mathbb{F})$, so by the trace lemma
$$n = \operatorname{tr}(I_{\mathbb{F}^n}) = \operatorname{tr}(LK) = \operatorname{tr}(KL) = \operatorname{tr}(I_{\mathbb{F}^m}) = m.$$
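The trace identity underlying this proof is easy to check numerically. A minimal sketch, assuming NumPy is available (the sizes and random seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))   # A in Mat_{m x n}(R)
B = rng.standard_normal((n, m))   # B in Mat_{n x m}(R)

# AB is m x m and BA is n x n, yet their traces coincide
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```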
Define the dimension of a vector space $V$ over $\mathbb{F}$ as $\dim_{\mathbb{F}} V = n$ if $V$ is isomorphic to $\mathbb{F}^n$.

Remark. $\dim_{\mathbb{C}} \mathbb{C} = 1$, $\dim_{\mathbb{R}} \mathbb{C} = 2$, $\dim_{\mathbb{Q}} \mathbb{R} = \infty$.

The set of all linear maps $\{L : V \to W\}$ over $\mathbb{F}$ is itself a vector space, denoted $\hom_{\mathbb{F}}(V, W)$.

Corollary. If $V$ and $W$ are finite dimensional vector spaces over $\mathbb{F}$, then $\hom_{\mathbb{F}}(V, W)$ is also finite dimensional and
$$\dim_{\mathbb{F}} \hom_{\mathbb{F}}(V, W) = (\dim_{\mathbb{F}} W) \cdot (\dim_{\mathbb{F}} V).$$

Proof. By choosing bases for $V$ and $W$ there is a natural mapping
$$\hom_{\mathbb{F}}(V, W) \to \operatorname{Mat}_{(\dim_{\mathbb{F}} W) \times (\dim_{\mathbb{F}} V)}(\mathbb{F}) \simeq \mathbb{F}^{(\dim_{\mathbb{F}} W) \cdot (\dim_{\mathbb{F}} V)}.$$
This map is both 1-1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.

1.4 Matrix Representations Redux

Let $L : V \to W$, with bases $x_1, \ldots, x_m$ for $V$ and $y_1, \ldots, y_n$ for $W$. The matrix for $L$ interpreted as a linear map is $[L] : \mathbb{F}^m \to \mathbb{F}^n$. The choices of basis for $V$ and $W$ define basis isomorphisms $[x_1 \cdots x_m] : \mathbb{F}^m \to V$ and $[y_1 \cdots y_n] : \mathbb{F}^n \to W$, where
$$[x_1 \cdots x_m] \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_m \end{bmatrix} = \alpha_1 x_1 + \cdots + \alpha_m x_m.$$
These fit into a commutative square, expressed by
$$L \circ [x_1 \cdots x_m] = [y_1 \cdots y_n]\,[L].$$

1.5 Subspaces

A nonempty subset $M \subset V$ is a subspace if $\alpha, \beta \in \mathbb{F}$ and $x, y \in M$ imply $\alpha x + \beta y \in M$. In particular, $0 \in M$. If $M, N \subset V$ are subspaces, then we can form two new subspaces, the sum and the intersection:
$$M + N = \{x + y : x \in M,\ y \in N\}, \qquad M \cap N = \{x : x \in M,\ x \in N\}.$$
$M$ and $N$ have trivial intersection if $M \cap N = \{0\}$. $M$ and $N$ are transversal if $M + N = V$. Two subspaces are complementary if they are transversal and have trivial intersection. $M, N$ form a direct sum decomposition of $V$ if $M \cap N = \{0\}$ and $M + N = V$; we write $V = M \oplus N$.

Example. $V = \mathbb{R}^2$, $M = \{(x, 0) : x \in \mathbb{R}\}$, the $x$-axis, and $N = \{(0, y) : y \in \mathbb{R}\}$, the $y$-axis.

Example. $V = \mathbb{R}^2$, $M = \{(x, 0) : x \in \mathbb{R}\}$, the $x$-axis, and $N = \{(y, y) : y \in \mathbb{R}\}$, a diagonal. Note $(x, y) = (x - y, 0) + (y, y)$, which gives $V = M \oplus N$.

If we have a direct sum decomposition $V = M \oplus N$, then we can construct the projection of $V$ onto $M$ along $N$. The map $E : V \to V$ is defined using the fact that each $z$ decomposes uniquely as $z = x + y$ with $x \in M$, $y \in N$, and mapping $z$ to $x$: $E(z) = E(x + y) = E(x) + E(y) = E(x) = x$. Thus $\operatorname{im}(E) = M$ and $\ker(E) = N$.

Definition. If $V$ is a vector space, a projection of $V$ is a linear operator $E$ on $V$ such that $E^2 = E$.
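The second example above produces a concrete projection: from $(x, y) = (x - y, 0) + (y, y)$, the projection onto $M$ along $N$ is $E(x, y) = (x - y, 0)$. A small sketch verifying $E^2 = E$, assuming NumPy:

```python
import numpy as np

# V = R^2, M = x-axis, N = {(y, y)}.  From (x, y) = (x - y, 0) + (y, y),
# the projection onto M along N sends (x, y) to (x - y, 0).
E = np.array([[1.0, -1.0],
              [0.0,  0.0]])

assert np.allclose(E @ E, E)        # E^2 = E: E is a projection
print(E @ np.array([3.0, 1.0]))     # [2. 0.]  -- the M-component of (3, 1)
print(E @ np.array([1.0, 1.0]))     # [0. 0.]  -- N is exactly ker(E)
```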
1.6 Linear Maps and Subspaces

Let $L : V \to W$ be a linear map over $\mathbb{F}$. The kernel (or nullspace) of $L$ is
$$\ker(L) = N(L) = \{x \in V : L(x) = 0\}$$
and the image (or range) of $L$ is
$$\operatorname{im}(L) = R(L) = L(V) = \{L(x) \in W : x \in V\}.$$

Lemma. $\ker(L)$ is a subspace of $V$ and $\operatorname{im}(L)$ is a subspace of $W$.

Proof. Assume that $\alpha_1, \alpha_2 \in \mathbb{F}$ and that $x_1, x_2 \in \ker(L)$. Then $L(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 L(x_1) + \alpha_2 L(x_2) = 0$, so $\alpha_1 x_1 + \alpha_2 x_2 \in \ker(L)$. Now assume $\alpha_1, \alpha_2 \in \mathbb{F}$ and $x_1, x_2 \in V$. Then $\alpha_1 L(x_1) + \alpha_2 L(x_2) = L(\alpha_1 x_1 + \alpha_2 x_2) \in \operatorname{im}(L)$.

Lemma. $L$ is 1-1 $\iff$ $\ker(L) = \{0\}$.

Proof. ($\Rightarrow$) Since $L(0) = L(0 \cdot 0) = 0 \cdot L(0) = 0$, if $L$ is 1-1 then $L(x) = 0 = L(0)$ implies $x = 0$. Hence $\ker(L) = \{0\}$. ($\Leftarrow$) Assume $\ker(L) = \{0\}$. If $L(x_1) = L(x_2)$, then linearity of $L$ tells us that $L(x_1 - x_2) = 0$. Then $\ker(L) = \{0\}$ implies $x_1 - x_2 = 0$, which shows that $x_1 = x_2$, as desired.

Lemma. Let $L : V \to W$ with $\dim V = \dim W$. Then: $L$ is 1-1 $\iff$ $L$ is onto $\iff$ $\dim \operatorname{im}(L) = \dim V$.

Proof. From the dimension formula, we have $\dim V = \dim \ker(L) + \dim \operatorname{im}(L)$. Thus $L$ is 1-1 $\iff$ $\ker(L) = \{0\}$ $\iff$ $\dim \ker(L) = 0$ $\iff$ $\dim \operatorname{im}(L) = \dim V = \dim W$ $\iff$ $\operatorname{im}(L) = W$, that is, $L$ is onto.

1.7 Dimension Formula

Theorem. Let $V$ be finite dimensional and $L : V \to W$ a linear map, all over $\mathbb{F}$. Then $\operatorname{im}(L)$ is finite dimensional and
$$\dim_{\mathbb{F}} V = \dim_{\mathbb{F}} \ker(L) + \dim_{\mathbb{F}} \operatorname{im}(L).$$

Proof. We know that $\dim \ker(L) \leq \dim V$ and that $\ker(L)$ has a complement $M$ of dimension $k = \dim V - \dim \ker(L)$. Since $M \cap \ker(L) = \{0\}$, the linear map $L$ must be 1-1 when restricted to $M$; moreover, every $L(x)$ equals $L(m)$ for the $M$-component $m$ of $x$, so $L(M) = \operatorname{im}(L)$. Thus $L|_M : M \to \operatorname{im}(L)$ is an isomorphism, i.e. $\dim \operatorname{im}(L) = \dim M = k$.

1.8 Matrix Calculations

Change of Basis Matrix. Given the two bases of $\mathbb{R}^2$, $\beta_1 = \{x_1 = (1, 1),\ x_2 = (1, 0)\}$ and $\beta_2 = \{y_1 = (4, 3),\ y_2 = (3, 2)\}$, we find the change-of-basis matrix $P$ from $\beta_1$ to $\beta_2$.
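The matrix $P$ can also be found numerically. A sketch under one common convention ($P$ maps coordinates relative to $\beta_1$ to coordinates relative to $\beta_2$), assuming NumPy:

```python
import numpy as np

B1 = np.array([[1.0, 1.0],   # columns: x1 = (1, 1), x2 = (1, 0)
               [1.0, 0.0]])
B2 = np.array([[4.0, 3.0],   # columns: y1 = (4, 3), y2 = (3, 2)
               [3.0, 2.0]])

# If c holds beta_1-coordinates of v, then B1 c = v = B2 (P c), so P = B2^{-1} B1.
P = np.linalg.solve(B2, B1)
print(P)                      # [[ 1. -2.]
                              #  [-1.  3.]]

# sanity check: x1 = 1*y1 - 1*y2 and x2 = -2*y1 + 3*y2
assert np.allclose(B2 @ P, B1)
```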
Recommended publications
  • Common Course Outline MATH 257 Linear Algebra 4 Credits
    Common Course Outline: MATH 257 Linear Algebra, 4 Credits. The Community College of Baltimore County.

    Description: MATH 257 – Linear Algebra is one of the suggested elective courses for students majoring in Mathematics, Computer Science or Engineering. Included are geometric vectors, matrices, systems of linear equations, vector spaces, linear transformations, determinants, eigenvectors and inner product spaces. 4 Credits: 5 lecture hours. Prerequisite: MATH 251 with a grade of "C" or better.

    Overall Course Objectives. Upon successfully completing the course, students will be able to:
    1. perform matrix operations;
    2. use Gaussian Elimination, Cramer's Rule, and the inverse of the coefficient matrix to solve systems of linear equations;
    3. find the inverse of a matrix by Gaussian Elimination or using the adjoint matrix;
    4. compute the determinant of a matrix using cofactor expansion or elementary row operations;
    5. apply Gaussian Elimination to solve problems concerning Markov Chains;
    6. verify that a structure is a vector space by checking the axioms;
    7. verify that a subset is a subspace and that a set of vectors is a basis;
    8. compute the dimensions of subspaces;
    9. compute the matrix representation of a linear transformation;
    10. apply notions of linear transformations to discuss rotations and reflections of two-dimensional space;
    11. compute eigenvalues and find their corresponding eigenvectors;
    12. diagonalize a matrix using eigenvalues;
    13. apply properties of vectors and dot product to prove results in geometry;
    14. apply notions of vectors, dot product and matrices to construct a best-fitting curve;
    15. construct a solution to real-world problems using problem-solving methods individually and in groups;
    16.
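Objective 2 lists three classical ways to solve a linear system; a minimal sketch comparing them, assuming NumPy ($A$ and $b$ are arbitrary illustrative values):

```python
import numpy as np

# Solve Ax = b three ways: Gaussian elimination (np.linalg.solve),
# the inverse of the coefficient matrix, and Cramer's rule.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

x_solve = np.linalg.solve(A, b)                    # LU-based elimination
x_inv = np.linalg.inv(A) @ b                       # x = A^{-1} b
x_cramer = np.array([
    np.linalg.det(np.column_stack((b, A[:, 1]))),  # replace column 1 by b
    np.linalg.det(np.column_stack((A[:, 0], b))),  # replace column 2 by b
]) / np.linalg.det(A)

assert np.allclose(x_solve, x_inv) and np.allclose(x_solve, x_cramer)
print(x_solve)   # -> [0.8 1.4]
```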
  • The Grassmann Manifold
    The Grassmann Manifold

    1. For vector spaces $V$ and $W$ denote by $L(V, W)$ the vector space of linear maps from $V$ to $W$. Thus $L(\mathbb{R}^k, \mathbb{R}^n)$ may be identified with the space $\mathbb{R}^{n \times k}$ of $n \times k$ matrices. An injective linear map $u : \mathbb{R}^k \to V$ is called a k-frame in $V$. The set
    $$GF_{k,n} = \{u \in L(\mathbb{R}^k, \mathbb{R}^n) : \operatorname{rank}(u) = k\}$$
    of k-frames in $\mathbb{R}^n$ is called the Stiefel manifold. Note that the special case $k = n$ is the general linear group:
    $$GL_k = \{a \in L(\mathbb{R}^k, \mathbb{R}^k) : \det(a) \neq 0\}.$$
    The set of all k-dimensional (vector) subspaces $\lambda \subset \mathbb{R}^n$ is called the Grassmann manifold of k-planes in $\mathbb{R}^n$ and denoted by $GR_{k,n}$, or sometimes $GR_{k,n}(\mathbb{R})$ or $GR_k(\mathbb{R}^n)$. Let
    $$\pi : GF_{k,n} \to GR_{k,n}, \qquad \pi(u) = u(\mathbb{R}^k)$$
    denote the map which assigns to each k-frame $u$ the subspace $u(\mathbb{R}^k)$ it spans. For $\lambda \in GR_{k,n}$ the fiber (preimage) $\pi^{-1}(\lambda)$ consists of those k-frames which form a basis for the subspace $\lambda$, i.e. for any $u \in \pi^{-1}(\lambda)$ we have
    $$\pi^{-1}(\lambda) = \{u \circ a : a \in GL_k\}.$$
    Hence we can (and will) view $GR_{k,n}$ as the orbit space of the group action
    $$GF_{k,n} \times GL_k \to GF_{k,n} : (u, a) \mapsto u \circ a.$$
    The exercises below will prove the following

    Theorem 2. The Stiefel manifold $GF_{k,n}$ is an open subset of the set $\mathbb{R}^{n \times k}$ of all $n \times k$ matrices. There is a unique differentiable structure on the Grassmann manifold $GR_{k,n}$ such that the map $\pi$ is a submersion.
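The key invariance here — that $u$ and $u \circ a$ span the same $k$-plane for $a \in GL_k$ — can be checked numerically by comparing the orthogonal projectors onto the two column spaces. A sketch, assuming NumPy (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 2, 4
u = rng.standard_normal((n, k))          # a 2-frame in R^4 (full column rank)
assert np.linalg.matrix_rank(u) == k

a = rng.standard_normal((k, k))
assert abs(np.linalg.det(a)) > 1e-12     # a lies in GL_k

# u and u @ a lie in the same fiber of pi: they span the same 2-plane.
span_u = np.linalg.svd(u)[0][:, :k]      # orthonormal basis of col(u) via SVD
span_ua = np.linalg.svd(u @ a)[0][:, :k]
# the orthogonal projections onto the two column spaces agree
assert np.allclose(span_u @ span_u.T, span_ua @ span_ua.T)
```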
  • Introduction to Linear Bialgebra
    INTRODUCTION TO LINEAR BIALGEBRA

    W. B. Vasantha Kandasamy, Department of Mathematics, Indian Institute of Technology, Madras, Chennai – 600036, India. e-mail: [email protected], web: http://mat.iitm.ac.in/~wbv

    Florentin Smarandache, Department of Mathematics, University of New Mexico, Gallup, NM 87301, USA. e-mail: [email protected]

    K. Ilanthenral, Editor, Maths Tiger, Quarterly Journal, Flat No.11, Mayura Park, 16, Kazhikundram Main Road, Tharamani, Chennai – 600 113, India. e-mail: [email protected]

    HEXIS, Phoenix, Arizona, 2005

    Recommended Citation: Smarandache, Florentin; W.B. Vasantha Kandasamy; and K. Ilanthenral. "INTRODUCTION TO LINEAR BIALGEBRA." (2005). https://digitalrepository.unm.edu/math_fsp/232

    This book is brought to you for free and open access by the Academic Department Resources at UNM Digital Repository. It has been accepted for inclusion in Mathematics and Statistics Faculty and Staff Publications by an authorized administrator of UNM Digital Repository. For more information, please contact [email protected], [email protected], [email protected].

    This book can be ordered in a paper bound reprint from: Books on Demand, ProQuest Information & Learning (University of Microfilm International), 300 N.
  • 21. Orthonormal Bases
    21. Orthonormal Bases

    The canonical/standard basis
    $$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
    has many useful properties.

    • Each of the standard basis vectors has unit length:
    $$\|e_i\| = \sqrt{e_i \cdot e_i} = \sqrt{e_i^T e_i} = 1.$$

    • The standard basis vectors are orthogonal (in other words, at right angles or perpendicular):
    $$e_i \cdot e_j = e_i^T e_j = 0 \quad \text{when } i \neq j.$$

    This is summarized by
    $$e_i^T e_j = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases},$$
    where $\delta_{ij}$ is the Kronecker delta. Notice that the Kronecker delta gives the entries of the identity matrix.

    Given column vectors $v$ and $w$, we have seen that the dot product $v \cdot w$ is the same as the matrix multiplication $v^T w$. This is the inner product on $\mathbb{R}^n$. We can also form the outer product $v w^T$, which gives a square matrix. The outer product on the standard basis vectors is interesting. Set
    $$\Pi_1 = e_1 e_1^T = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \quad \ldots, \quad \Pi_n = e_n e_n^T = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & \cdots & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
    In short, $\Pi_i$ is the diagonal square matrix with a 1 in the $i$th diagonal position and zeros everywhere else.
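A short numeric illustration of these identities, assuming NumPy ($n = 4$ is an arbitrary choice):

```python
import numpy as np

n = 4
e = np.eye(n)                 # columns are the standard basis vectors e_1..e_n

# e_i . e_j = delta_ij: the Gram matrix of the standard basis is the identity
assert np.allclose(e.T @ e, np.eye(n))

# outer product Pi_i = e_i e_i^T has a lone 1 in the (i, i) position
Pi_2 = np.outer(e[:, 1], e[:, 1])
print(Pi_2)

# the Pi_i sum to the identity (a resolution of the identity)
assert np.allclose(sum(np.outer(e[:, i], e[:, i]) for i in range(n)), np.eye(n))
```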
  • 2 Hilbert Spaces
    2 Hilbert spaces

    You should have seen some examples last semester. The simplest (finite-dimensional) example is $\mathbb{C}^n$ with its standard inner product. It's worth recalling from linear algebra that if $V$ is an $n$-dimensional (complex) vector space, then from any set of $n$ linearly independent vectors we can manufacture an orthonormal basis $e_1, e_2, \ldots, e_n$ using the Gram-Schmidt process. In terms of this basis we can write any $v \in V$ in the form
    $$v = \sum a_i e_i, \qquad a_i = (v, e_i),$$
    which can be derived by taking the inner product of the equation $v = \sum a_i e_i$ with $e_i$. We also have
    $$\|v\|^2 = \sum_{i=1}^n |a_i|^2.$$

    • A complex Hilbert space $H$ is a complete normed space over $\mathbb{C}$ whose norm is derived from an inner product. That is, we assume that there is a sesquilinear form $(\cdot\,, \cdot) : H \times H \to \mathbb{C}$, linear in the first variable and conjugate linear in the second, such that
    $$(f, g) = \overline{(g, f)}, \qquad (f, f) \geq 0 \quad \forall f \in H, \qquad (f, f) = 0 \implies f = 0.$$
    The norm and inner product are related by $(f, f) = \|f\|^2$. We will always assume that $H$ is separable (has a countable dense subset).

    • As usual for a normed space, the distance on $H$ is given by $d(f, g) = \|f - g\| = \sqrt{(f - g, f - g)}$.

    • Standard infinite-dimensional examples are $\ell^2(\mathbb{N})$ or $\ell^2(\mathbb{Z})$, the space of square-summable sequences, and $L^2(\Omega)$ where $\Omega$ is a measurable subset of $\mathbb{R}^n$.

    • The Cauchy-Schwarz and triangle inequalities,
    $$|(f, g)| \leq \|f\|\,\|g\|, \qquad \|f + g\| \leq \|f\| + \|g\|,$$
    can be derived fairly easily from the inner product.

    2.1 Orthogonality
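The Gram-Schmidt process mentioned above is straightforward to sketch. A minimal, unoptimized version for complex vectors, assuming NumPy (note np.vdot conjugates its first argument, so np.vdot(e, v) plays the role of $(v, e)$ in the convention above):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent complex vectors."""
    basis = []
    for v in vectors:
        w = v - sum(np.vdot(e, v) * e for e in basis)  # subtract projections
        basis.append(w / np.linalg.norm(w))
    return basis

rng = np.random.default_rng(2)
vs = [rng.standard_normal(3) + 1j * rng.standard_normal(3) for _ in range(3)]
es = gram_schmidt(vs)

# Gram matrix of the output is the identity: (e_i, e_j) = delta_ij
G = np.array([[np.vdot(ei, ej) for ej in es] for ei in es])
assert np.allclose(G, np.eye(3))
```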
  • Bornologically Isomorphic Representations of Tensor Distributions
    Bornologically isomorphic representations of distributions on manifolds

    E. Nigsch. Thursday 15th November, 2018

    Abstract. Distributional tensor fields can be regarded as multilinear mappings with distributional values or as (classical) tensor fields with distributional coefficients. We show that the corresponding isomorphisms hold also in the bornological setting.

    1 Introduction

    Let $\mathcal{D}'(M) := \Gamma_c(M, \operatorname{Vol}(M))'$ and $\mathcal{D}'^r_s(M) := \Gamma_c(M, T^s_r(M) \otimes \operatorname{Vol}(M))'$ be the strong duals of the space of compactly supported sections of the volume bundle $\operatorname{Vol}(M)$ and of its tensor product with the tensor bundle $T^s_r(M)$ over a manifold; these are the spaces of scalar and tensor distributions on $M$ as defined in [?, ?]. A property of the space of tensor distributions which is fundamental in distributional geometry is given by the $C^\infty(M)$-module isomorphisms
    $$\mathcal{D}'^r_s(M) \cong L_{C^\infty(M)}(T^s_r(M), \mathcal{D}'(M)) \cong T^r_s(M) \otimes_{C^\infty(M)} \mathcal{D}'(M) \qquad (1)$$
    (cf. [?, Theorem 3.1.12 and Corollary 3.1.15]), where $C^\infty(M)$ is the space of smooth functions on $M$. In [?] a space of Colombeau-type nonlinear generalized tensor fields was constructed. This involved handling smooth functions (in the sense of convenient calculus as developed in [?]) in particular on the $C^\infty(M)$-module tensor products $T^r_s(M) \otimes_{C^\infty(M)} \mathcal{D}'(M)$ and $\Gamma(E) \otimes_{C^\infty(M)} \Gamma(F)$, where $\Gamma(E)$ denotes the space of smooth sections of a vector bundle $E$ over $M$. In [?], however, only minor attention was paid to questions of topology on these tensor products.
  • Solution: The First Element of the Dual Basis Is the Linear Function α₁*
    Homework assignment 7, p. 105

    Exercise 2. Let $B = \{\alpha_1, \alpha_2, \alpha_3\}$ be the basis for $\mathbb{C}^3$ defined by
    $$\alpha_1 = (1, 0, -1), \quad \alpha_2 = (1, 1, 1), \quad \alpha_3 = (2, 2, 0).$$
    Find the dual basis of $B$.

    Solution: The first element of the dual basis is the linear function $\alpha_1^*$ such that $\alpha_1^*(\alpha_1) = 1$, $\alpha_1^*(\alpha_2) = 0$ and $\alpha_1^*(\alpha_3) = 0$. To describe such a function more explicitly we need to find its values on the standard basis vectors $e_1$, $e_2$ and $e_3$. To do this, express $e_1, e_2, e_3$ through $\alpha_1, \alpha_2, \alpha_3$ (refer to the solution of Exercise 1, pp. 54-55, from Homework 6). For each $i = 1, 2, 3$ you will find the numbers $a_i, b_i, c_i$ such that $e_i = a_i \alpha_1 + b_i \alpha_2 + c_i \alpha_3$ (i.e. the coordinates of $e_i$ relative to the basis $\alpha_1, \alpha_2, \alpha_3$). Then by linearity of $\alpha_1^*$ we get that $\alpha_1^*(e_i) = a_i$. Similarly $\alpha_2^*(e_i) = b_i$ and $\alpha_3^*(e_i) = c_i$. This is the answer. It can also be reformulated as follows. If $P$ is the transition matrix from the standard basis $e_1, e_2, e_3$ to $\alpha_1, \alpha_2, \alpha_3$, i.e. $(\alpha_1, \alpha_2, \alpha_3) = (e_1, e_2, e_3)P$, then $(P^{-1})^t$ is the transition matrix from the dual basis $e_1^*, e_2^*, e_3^*$ to the dual basis $\alpha_1^*, \alpha_2^*, \alpha_3^*$, i.e. $(\alpha_1^*, \alpha_2^*, \alpha_3^*) = (e_1^*, e_2^*, e_3^*)(P^{-1})^t$. Note that this problem is basically the change of coordinates problem: e.g. the value of $\alpha_1^*$ on the vector $v \in \mathbb{C}^3$ is the first coordinate of $v$ relative to the basis $\alpha_1, \alpha_2, \alpha_3$.
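The recipe in the last paragraph is easy to carry out numerically: the rows of $P^{-1}$ tabulate the dual functionals on the standard basis. A sketch, assuming NumPy:

```python
import numpy as np

# columns of P are alpha_1, alpha_2, alpha_3 expressed in the standard basis
P = np.array([[ 1.0, 1.0, 2.0],
              [ 0.0, 1.0, 2.0],
              [-1.0, 1.0, 0.0]])

# row i of P^{-1} gives the values of alpha_i^* on e_1, e_2, e_3
D = np.linalg.inv(P)
print(D)

# defining property of the dual basis: alpha_i^*(alpha_j) = delta_ij
assert np.allclose(D @ P, np.eye(3))
```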
  • Orthogonal Complements (Revised Version)
    Orthogonal Complements (Revised Version)

    Math 108A: May 19, 2010. John Douglas Moore

    1 The dot product

    You will recall that the dot product was discussed in earlier calculus courses. If $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ are elements of $\mathbb{R}^n$, we define their dot product by
    $$x \cdot y = x_1 y_1 + \cdots + x_n y_n.$$
    The dot product satisfies several key axioms:
    1. it is symmetric: $x \cdot y = y \cdot x$;
    2. it is bilinear: $(a x + x') \cdot y = a(x \cdot y) + x' \cdot y$;
    3. and it is positive-definite: $x \cdot x \geq 0$, and $x \cdot x = 0$ if and only if $x = 0$.
    The dot product is an example of an inner product on the vector space $V = \mathbb{R}^n$ over $\mathbb{R}$; inner products will be treated thoroughly in Chapter 6 of [1]. Recall that the length of an element $x \in \mathbb{R}^n$ is defined by $|x| = \sqrt{x \cdot x}$. Note that the length of an element $x \in \mathbb{R}^n$ is always nonnegative.

    Cauchy-Schwarz Theorem. If $x \neq 0$ and $y \neq 0$, then
    $$-1 \leq \frac{x \cdot y}{|x|\,|y|} \leq 1. \qquad (1)$$
    Sketch of proof: If $v$ is any element of $\mathbb{R}^n$, then $v \cdot v \geq 0$. Hence
    $$\bigl(x(y \cdot y) - y(x \cdot y)\bigr) \cdot \bigl(x(y \cdot y) - y(x \cdot y)\bigr) \geq 0.$$
    Expanding using the axioms for the dot product yields
    $$(x \cdot x)(y \cdot y)^2 - 2(x \cdot y)^2(y \cdot y) + (x \cdot y)^2(y \cdot y) \geq 0,$$
    or
    $$(x \cdot x)(y \cdot y)^2 \geq (x \cdot y)^2(y \cdot y).$$
    Dividing by $y \cdot y$, we obtain
    $$(x \cdot x)(y \cdot y) \geq (x \cdot y)^2, \quad \text{or} \quad \frac{(x \cdot y)^2}{|x|^2 |y|^2} \leq 1,$$
    and (1) follows by taking the square root.
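A quick numeric check of the Cauchy-Schwarz inequality (1), assuming NumPy (the vectors and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.standard_normal(5), rng.standard_normal(5)

lhs = abs(x @ y)                              # |x . y|
rhs = np.linalg.norm(x) * np.linalg.norm(y)   # |x| |y|
assert lhs <= rhs + 1e-12

# equality holds exactly when x and y are parallel
assert np.isclose(abs((2.5 * x) @ x), np.linalg.norm(2.5 * x) * np.linalg.norm(x))
```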
  • Vectors and Dual Vectors
    Vectors

    By now you have a pretty good experience with "vectors". Usually, a vector is defined as a quantity that has a direction and a magnitude, such as a position vector, velocity vector, acceleration vector, etc. However, the notion of a vector has a considerably wider realm of applicability than these examples might suggest. The set of all real numbers forms a vector space, as does the set of all complex numbers. The set of functions on a set (e.g., functions of one variable, $f(x)$) forms a vector space. Solutions of linear homogeneous equations form a vector space. We begin by giving the abstract rules for forming a space of vectors, also known as a vector space.

    A vector space $V$ is a set equipped with an operation of "addition" and an additive identity. The elements of the set are called vectors, which we shall denote as $\vec{u}$, $\vec{v}$, $\vec{w}$, etc. For now, you can think of them as position vectors in order to keep yourself sane.

    Addition is an operation in which two vectors, say $\vec{u}$ and $\vec{v}$, can be combined to make another vector, say, $\vec{w}$. We denote this operation by the symbol "+":
    $$\vec{u} + \vec{v} = \vec{w}. \qquad (1)$$
    Do not be fooled by this simple notation. The "addition" of vectors may be quite a different operation than ordinary arithmetic addition. For example, if we view position vectors in the x-y plane as "arrows" drawn from the origin, the addition of vectors is defined by the parallelogram rule. Clearly this rule is quite different than ordinary "addition".
  • Determinants Math 122 Calculus III D Joyce, Fall 2012
    Determinants

    Math 122 Calculus III. D Joyce, Fall 2012

    What they are. A determinant is a value associated to a square array of numbers, that square array being called a square matrix. For example, here are determinants of a general $2 \times 2$ matrix and a general $3 \times 3$ matrix:
    $$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,$$
    $$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - afh - bdi.$$
    The determinant of a matrix $A$ is usually denoted $|A|$ or $\det(A)$. You can think of the rows of the determinant as being vectors. For the $3 \times 3$ matrix above, the vectors are $u = (a, b, c)$, $v = (d, e, f)$, and $w = (g, h, i)$. Then the determinant is a value associated to $n$ vectors in $\mathbb{R}^n$. There's a general definition for $n \times n$ determinants. It's a particular signed sum of products of $n$ entries in the matrix where each product is of one entry in each row and column. The two ways you can choose one entry in each row and column of the $2 \times 2$ matrix give you the two products $ad$ and $bc$. There are six ways of choosing one entry in each row and column in a $3 \times 3$ matrix, and generally, there are $n!$ ways in an $n \times n$ matrix. Thus, the determinant of a $4 \times 4$ matrix is the signed sum of 24, which is $4!$, terms. In this general definition, half the terms are taken positively and half negatively. In class, we briefly saw how the signs are determined by permutations.
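The "signed sum over all ways of choosing one entry per row and column" can be written directly as a sum over permutations. A sketch, assuming NumPy (the sign is computed by counting inversions; the test matrix is an arbitrary example):

```python
import numpy as np
from itertools import permutations

def det_permutation(M):
    """Signed sum over all n! permutations: sum_s sgn(s) * prod_i M[i, s(i)]."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        # sgn(s) = (-1)^(number of inversions of s)
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inv * np.prod([M[i, perm[i]] for i in range(n)])
    return total

M = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
assert np.isclose(det_permutation(M), np.linalg.det(M))   # both give -3
```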
  • Duality, Part 1: Dual Bases and Dual Maps Notation
    Duality, part 1: Dual Bases and Dual Maps

    Notation. $\mathbb{F}$ denotes either $\mathbb{R}$ or $\mathbb{C}$. $V$ and $W$ denote vector spaces over $\mathbb{F}$.

    Definition: linear functional. A linear functional on $V$ is a linear map from $V$ to $\mathbb{F}$. In other words, a linear functional is an element of $\mathcal{L}(V, \mathbb{F})$.

    Examples:
    • Define $\varphi : \mathbb{R}^3 \to \mathbb{R}$ by $\varphi(x, y, z) = 4x - 5y + 2z$. Then $\varphi$ is a linear functional on $\mathbb{R}^3$.
    • Fix $(b_1, \ldots, b_n) \in \mathbb{C}^n$. Define $\varphi : \mathbb{C}^n \to \mathbb{C}$ by $\varphi(z_1, \ldots, z_n) = b_1 z_1 + \cdots + b_n z_n$. Then $\varphi$ is a linear functional on $\mathbb{C}^n$.
    • Define $\varphi : \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ by $\varphi(p) = 3p''(5) + 7p(4)$. Then $\varphi$ is a linear functional on $\mathcal{P}(\mathbb{R})$.
    • Define $\varphi : \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ by $\varphi(p) = \int_0^1 p(x)\, dx$. Then $\varphi$ is a linear functional on $\mathcal{P}(\mathbb{R})$.
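The first example can be realized concretely: a linear functional on $\mathbb{R}^3$ is a row vector acting by the dot product. A sketch checking linearity, assuming NumPy (the test vectors and scalars are arbitrary):

```python
import numpy as np

# the linear functional phi(x, y, z) = 4x - 5y + 2z, as a vector acting by dot product
phi = np.array([4.0, -5.0, 2.0])

v, w = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, -1.0])
a, b = 2.0, -3.0

# linearity: phi(a v + b w) = a phi(v) + b phi(w)
assert np.isclose(phi @ (a * v + b * w), a * (phi @ v) + b * (phi @ w))
print(phi @ v)   # -> 0.0, since 4*1 - 5*2 + 2*3 = 0
```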
  • 28. Exterior Powers
    28. Exterior powers

    28.1 Desiderata
    28.2 Definitions, uniqueness, existence
    28.3 Some elementary facts
    28.4 Exterior powers $\wedge^i f$ of maps
    28.5 Exterior powers of free modules
    28.6 Determinants revisited
    28.7 Minors of matrices
    28.8 Uniqueness in the structure theorem
    28.9 Cartan's lemma
    28.10 Cayley-Hamilton Theorem
    28.11 Worked examples

    While many of the arguments here have analogues for tensor products, it is worthwhile to repeat these arguments with the relevant variations, both for practice, and to be sensitive to the differences.

    1. Desiderata

    Again, we review missing items in our development of linear algebra. We are missing a development of determinants of matrices whose entries may be in commutative rings, rather than fields. We would like an intrinsic definition of determinants of endomorphisms, rather than one that depends upon a choice of coordinates, even if we eventually prove that the determinant is independent of the coordinates. We anticipate that Artin's axiomatization of determinants of matrices should be mirrored in much of what we do here. We want a direct and natural proof of the Cayley-Hamilton theorem. Linear algebra over fields is insufficient, since the introduction of the indeterminate $x$ in the definition of the characteristic polynomial takes us outside the class of vector spaces over fields. We want to give a conceptual proof for the uniqueness part of the structure theorem for finitely-generated modules over principal ideal domains. Multi-linear algebra over fields is surely insufficient for this.

    2. Definitions, uniqueness, existence

    Let $R$ be a commutative ring with 1.