Chapter 3: Eigenvalues and Eigenvectors of Linear Operators


3.1 Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are associated with linear operators, not linear transformations in general. Linear operators map a vector space to itself, whereas linear transformations map one vector space to another. We shall see how important eigenvectors are in applications of linear algebra.

Definition 3.1 An eigenvector v of a linear operator L on a vector space V is a nonzero vector v that satisfies the equation

    L(v) = λv,    (3.1)

for some scalar λ. The value of λ is called the eigenvalue associated with the eigenvector v. Eigenvalue and eigenvector together make an eigenpair.

Other terms for eigenvalues are “latent values”, “proper values”, or “characteristic values”. In G2 and G3, eigenvectors corresponding to a nonzero eigenvalue are easy to visualize: they are vectors that, when mapped by L, do not change direction. They may change length (due to λ), and may even point in the direction opposite to v (if λ < 0), but they cannot point in any other direction. In any other space, v is an eigenvector if it is mapped to a multiple of itself by L.

In G2 and G3, wherein we can visualize vectors, the action of a linear operator may make eigenpairs clear. Three instances of this are the linear operators in Examples 2.11, 2.15, and 2.16.

Example 3.1 What are eigenpairs of the linear operator in Example 2.16?

Solution The linear operator reflects vectors in the xz-plane. Clearly any vector from the origin to a point in the xz-plane lies in the xz-plane, and its reflection in the xz-plane is itself. Hence, L(v) = v for any such vector. Consequently, every nonzero vector that lies in the xz-plane is an eigenvector with eigenvalue λ = 1. In addition, vectors perpendicular to the xz-plane reflect to the negatives of themselves. Hence, for any such vector, L(v) = −v. In other words, vectors perpendicular to the xz-plane are eigenvectors with eigenvalue λ = −1. •

Example 3.2 What are eigenpairs of the linear operator of Example 2.15 when 0 < θ < π/2?

Solution When vectors along the z-axis are rotated around the z-axis, they are unchanged. In other words, such vectors are eigenvectors with eigenvalue λ = 1. •

Example 3.3 What are eigenpairs of the linear operator in Example 2.11?

Solution The linear operator takes orthogonal components of vectors along the plane y = x. Clearly any vector from the origin to a point in the plane lies in the plane, and its orthogonal component is itself. Hence, L(v) = v for any such vector. Consequently, every nonzero vector that lies in the plane y = x is an eigenvector with eigenvalue λ = 1. In addition, the orthogonal component of any vector perpendicular to the plane y = x is the zero vector. Hence, for any such vector, L(v) = 0. In other words, vectors perpendicular to the plane are eigenvectors with eigenvalue λ = 0. •

In general, it is impossible to spot eigenvectors of linear operators as we did in the last three examples. Matrices associated with linear operators provide an alternative. An eigenvalue λ and an eigenvector v of a linear operator L satisfy L(v) = λv. When a matrix A can be associated with the linear operator, then L(v) = Av. In such cases, we can say that (λ, v) is an eigenpair for the linear operator if

    Av = λv   (provided v ≠ 0).    (3.2)

In this equation, v must be its column vector (or matrix) representation.
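Condition 3.2 is also easy to test numerically. The following is a minimal sketch in Python with NumPy (the helper is_eigenpair and the sample vectors are illustrative choices, not from the text), applied to the reflection operator of Example 3.1:

```python
import numpy as np

# Reflection in the xz-plane (the operator of Examples 2.16 and 3.1):
# it fixes the x- and z-components and negates the y-component.
A = np.array([[1.0,  0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])

def is_eigenpair(A, v, lam, tol=1e-12):
    """Test the defining condition A v = lam v for a nonzero vector v."""
    v = np.asarray(v, dtype=float)
    return bool(np.any(v != 0.0)) and np.allclose(A @ v, lam * v, atol=tol)

print(is_eigenpair(A, [3.0, 0.0, 5.0],  1.0))   # True: v lies in the xz-plane
print(is_eigenpair(A, [0.0, 2.0, 0.0], -1.0))   # True: v is perpendicular to the xz-plane
print(is_eigenpair(A, [1.0, 1.0, 0.0],  1.0))   # False: reflection changes this direction
```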
If we were to equate components, we would obtain a system of nonlinear equations for λ and the components of v. Alternatively, suppose we rewrite the system in the following equivalent ways:

\[
\begin{aligned}
Av &= \lambda I v && \text{($I$ an identity matrix)} \\
Av - \lambda I v &= 0 && \text{($0$ a zero matrix)} \\
(A - \lambda I)v &= 0 && (3.3)
\end{aligned}
\]

We can regard this as a system of linear, homogeneous equations in the components of v (where some of the coefficients involve the unknown λ). Such a system has nontrivial solutions if and only if the determinant of the coefficient matrix is zero, det(A − λI) = 0. We have just established the following theorem.

Theorem 3.1 When A is the matrix associated with a linear operator L, λ is an eigenvalue of L if, and only if, it satisfies the equation

    det(A − λI) = 0.    (3.4)

Equation 3.4 is called the characteristic equation of the matrix A representing the linear operator; its left side is called the characteristic polynomial of matrix A. This certainly would be appropriate terminology were we to call λ a “characteristic” value. Characteristic polynomial and characteristic equation are the accepted terminology no matter what term is used for eigenvalues.

Before making some general observations, we illustrate how the characteristic equation gives eigenvalues in the earlier examples of this section. The matrix associated with the linear operator of Example 2.16 is

\[
A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]

Its eigenvalues are defined by the characteristic equation

\[
0 = \det(A - \lambda I) = \det\begin{pmatrix} 1-\lambda & 0 & 0 \\ 0 & -1-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = -(1-\lambda)^2(1+\lambda).
\]

Solutions are λ = ±1. These were the eigenvalues found in Example 3.1.

The matrix associated with the linear operator of Example 2.15 is

\[
A = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]

Eigenvalues are defined by the characteristic equation

\[
0 = \det(A - \lambda I) = \det\begin{pmatrix} \cos\theta-\lambda & -\sin\theta & 0 \\ \sin\theta & \cos\theta-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = (1-\lambda)\left[(\cos\theta-\lambda)^2 + \sin^2\theta\right].
\]

Clearly λ = 1 is an eigenvalue. In addition, we should set

\[
0 = (\cos\theta-\lambda)^2 + \sin^2\theta = \cos^2\theta - 2\lambda\cos\theta + \lambda^2 + \sin^2\theta = \lambda^2 - 2\lambda\cos\theta + 1.
\]

Solutions of this quadratic are

\[
\lambda = \frac{2\cos\theta \pm \sqrt{4\cos^2\theta - 4}}{2} = \cos\theta \pm \sqrt{-\sin^2\theta},
\]

an impossibility. Thus (cos θ − λ)² + sin²θ ≠ 0, and λ = 1 is the only eigenvalue, as was discovered in Example 3.2.

The matrix associated with the linear operator of Example 3.3 is

\[
A = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]

Its eigenvalues are defined by the characteristic equation

\[
0 = \det(A - \lambda I) = \det\begin{pmatrix} 1/2-\lambda & 1/2 & 0 \\ 1/2 & 1/2-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = -\lambda(1-\lambda)^2.
\]

Solutions are λ = 0 and λ = 1. These were the eigenvalues found in Example 3.3. Here is another example.

Example 3.4 If the matrix associated with a linear operator on some 3-dimensional space is

\[
A = \begin{pmatrix} 5 & -3 & 3 \\ -6 & 8 & -6 \\ -12 & 12 & -10 \end{pmatrix},
\]

find eigenvalues of the operator.

Solution Eigenvalues are given by the characteristic equation

\[
0 = \det(A - \lambda I) = \det\begin{pmatrix} 5-\lambda & -3 & 3 \\ -6 & 8-\lambda & -6 \\ -12 & 12 & -10-\lambda \end{pmatrix}.
\]

If we expand along the first row,

\[
\begin{aligned}
0 &= (5-\lambda)\left[(8-\lambda)(-10-\lambda)+72\right] + 3\left[-6(-10-\lambda) - 72\right] - 3\left[72 - 12(8-\lambda)\right] \\
&= (5-\lambda)(\lambda^2+2\lambda-8) + 3(6\lambda-12) - 3(12\lambda-24) \\
&= -\lambda^3 + 3\lambda^2 - 4 = -(\lambda+1)(\lambda-2)^2.
\end{aligned}
\]

Eigenvalues are therefore λ = −1 and λ = 2. •
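A computation like Example 3.4 can be cross-checked numerically. Here is a minimal NumPy sketch (an illustration, not part of the text); note that np.poly returns the coefficients of det(λI − A), which for a 3 × 3 matrix is the negative of det(A − λI), so it has the same roots as characteristic equation 3.4:

```python
import numpy as np

# The matrix of Example 3.4.
A = np.array([[  5.0, -3.0,   3.0],
              [ -6.0,  8.0,  -6.0],
              [-12.0, 12.0, -10.0]])

# Coefficients of det(lam*I - A), highest power first; for odd n this is
# -det(A - lam*I), so the roots are the same eigenvalues.
coeffs = np.poly(A)
print(np.round(coeffs, 10))    # approximately [1, -3, 0, 4], i.e. lam^3 - 3*lam^2 + 4

print(np.roots(coeffs))        # roots 2, 2, -1 (up to rounding)
print(np.linalg.eigvals(A))    # the same eigenvalues, computed directly
```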
These examples indicate that characteristic equation 3.4 for a linear operator on an n-dimensional space is an nth degree polynomial equation in λ. Its solutions are the eigenvalues of the linear operator. We now turn our attention to finding eigenvectors once eigenvalues have been calculated. With λ known, equation 3.3 represents a system of homogeneous, linear equations in the components of the corresponding eigenvector v.

Because the determinant of the system is zero, there is an infinity of solutions. In other words, eigenvectors are not unique; there is always an infinity of eigenvectors corresponding to every eigenvalue. Because the zero vector is never considered an eigenvector of a linear operator, the totality of eigenvectors corresponding to a single eigenvalue does not constitute a subspace of the vector space on which the operator operates. If we include the zero vector in the set of eigenvectors, the set becomes a subspace, called the eigenspace associated with the eigenvalue. Realize, however, that all nonzero vectors in the eigenspace are eigenvectors, but the zero vector is not an eigenvector.

Example 3.5 Find eigenvectors corresponding to the eigenvalues λ = −1 and λ = 2 in Example 3.4. Assume that the operator associated with the matrix operates on G3.

Solution Eigenvectors corresponding to λ = −1 satisfy

\[
0 = (A + I)v = \begin{pmatrix} 6 & -3 & 3 \\ -6 & 9 & -6 \\ -12 & 12 & -9 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.
\]

The reduced row echelon form for the augmented matrix of this system is

\[
\begin{pmatrix} 1 & 0 & 1/4 & 0 \\ 0 & 1 & -1/2 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

When we convert to equations,

\[
v_1 + \frac{v_3}{4} = 0, \qquad v_2 - \frac{v_3}{2} = 0.
\]

Eigenvectors are therefore

\[
v = \begin{pmatrix} -v_3/4 \\ v_3/2 \\ v_3 \end{pmatrix} = -\frac{v_3}{4}\begin{pmatrix} 1 \\ -2 \\ -4 \end{pmatrix}.
\]

In other words, any multiple of v = ⟨1, −2, −4⟩ is an eigenvector corresponding to λ = −1; the eigenspace of λ = −1 is a line of eigenvectors. To check our calculations, we could multiply

\[
Av = \begin{pmatrix} 5 & -3 & 3 \\ -6 & 8 & -6 \\ -12 & 12 & -10 \end{pmatrix}\begin{pmatrix} 1 \\ -2 \\ -4 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \\ 4 \end{pmatrix} = (-1)\begin{pmatrix} 1 \\ -2 \\ -4 \end{pmatrix} = \lambda v.
\]

Eigenvectors corresponding to λ = 2 are given by

\[
0 = (A - 2I)v = \begin{pmatrix} 3 & -3 & 3 \\ -6 & 6 & -6 \\ -12 & 12 & -12 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.
\]
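Row reductions like those in Example 3.5 can be reproduced in exact arithmetic with a computer algebra system. The sketch below, assuming SymPy (an illustration, not from the text), recovers the eigenvectors for λ = −1 and carries out the corresponding computation for λ = 2, whose null space is two-dimensional, so the eigenspace of λ = 2 is a plane of eigenvectors:

```python
from sympy import Matrix, eye

# The matrix of Examples 3.4 and 3.5, in exact rational arithmetic.
A = Matrix([[  5, -3,   3],
            [ -6,  8,  -6],
            [-12, 12, -10]])

# lam = -1: a basis for the null space of A + I.
# SymPy returns the column (-1/4, 1/2, 1), a multiple of <1, -2, -4> as found above.
print((A + eye(3)).nullspace())

# lam = 2: the null space of A - 2I has two basis vectors,
# so the eigenspace for lam = 2 is a plane rather than a line.
print((A - 2 * eye(3)).nullspace())
```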