Numerical Linear Algebra Chap. 1: Basic Concepts from Linear Algebra

Total Pages: 16

File Type: pdf, Size: 1020 KB

Numerical Linear Algebra, Chapter 1: Basic Concepts from Linear Algebra
Heinrich Voss ([email protected])
Hamburg University of Technology, Institute for Numerical Simulation (TUHH), 2005

Basic Ideas from Linear Algebra: Vectors

The vector space $\mathbb{R}^n$ is defined by
$$\mathbb{R}^n := \{(x_1,\dots,x_n)^T : x_j \in \mathbb{R},\ j = 1,\dots,n\},$$
with componentwise addition and scalar multiplication,
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{pmatrix}, \qquad \alpha \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \alpha x_1 \\ \alpha x_2 \\ \vdots \\ \alpha x_n \end{pmatrix},$$
and $\mathbb{C}^n$ correspondingly.

A subset $X \subset \mathbb{R}^n$ is a subspace of $\mathbb{R}^n$ if it is closed with respect to addition and multiplication by scalars, i.e.
$$x, y \in X \Rightarrow x + y \in X, \qquad \alpha \in \mathbb{R},\ x \in X \Rightarrow \alpha x \in X.$$

Basic Ideas from Linear Algebra: Subspaces

A set of vectors $\{a^1,\dots,a^m\} \subset \mathbb{C}^n$ is linearly independent if
$$\sum_{j=1}^m \alpha_j a^j = 0 \ \Rightarrow\ \alpha_j = 0,\ j = 1,\dots,m.$$
Otherwise, a nontrivial linear combination of the $a^j$ is zero, and $\{a^1,\dots,a^m\}$ is said to be linearly dependent.

Given vectors $a^1,\dots,a^m$, the set of all linear combinations of these vectors is a subspace referred to as the span of $a^1,\dots,a^m$:
$$\operatorname{span}\{a^1,\dots,a^m\} = \Big\{\sum_{j=1}^m \alpha_j a^j : \alpha_j \in \mathbb{C}\Big\}.$$

If $\{a^1,\dots,a^m\}$ is linearly independent and $b \in \operatorname{span}\{a^1,\dots,a^m\}$, then $b$ has a unique representation as a linear combination of the $a^j$.

If $S_1,\dots,S_k$ are subspaces of $\mathbb{C}^n$, then their sum
$$S := S_1 + \dots + S_k := \Big\{\sum_{j=1}^k a^j : a^j \in S_j,\ j = 1,\dots,k\Big\}$$
is also a subspace of $\mathbb{C}^n$. $S$ is said to be the direct sum if each $x \in S$ has a unique representation $x = a^1 + \dots + a^k$ with $a^j \in S_j$. In this case we write $S = S_1 \oplus \dots \oplus S_k$. The intersection of the subspaces $S_1,\dots,S_k$ is also a subspace, $S = S_1 \cap \dots \cap S_k$.

Basic Ideas from Linear Algebra: Dimension

The subset $\{a^{i_1},\dots,a^{i_k}\}$ is a maximal linearly independent subset of $\{a^1,\dots,a^m\}$ if it is linearly independent and is not properly contained in any linearly independent subset of $\{a^1,\dots,a^m\}$. If $\{a^{i_1},\dots,a^{i_k}\}$ is a maximal linearly independent subset, then $\operatorname{span}\{a^{i_1},\dots,a^{i_k}\} = \operatorname{span}\{a^1,\dots,a^m\}$.

If $S \subset \mathbb{C}^n$ is a subspace, it is always possible to find a maximal linearly independent subset $\{a^1,\dots,a^k\}$. Then $S = \operatorname{span}\{a^1,\dots,a^k\}$, and $\{a^1,\dots,a^k\}$ is called a basis of $S$. All bases for a subspace $S$ have the same number of elements. This number is the dimension of $S$, and it is denoted by $\dim(S)$.

Basic Ideas from Linear Algebra: Linear map

A map $A : \mathbb{C}^n \to \mathbb{C}^m$ is called linear if
$$A(x + y) = Ax + Ay \ \text{ for every } x, y \in \mathbb{C}^n \quad\text{and}\quad A(\lambda x) = \lambda Ax \ \text{ for every } x \in \mathbb{C}^n,\ \lambda \in \mathbb{C}.$$
For $j = 1,\dots,n$ let $e^j \in \mathbb{C}^n$ be the $j$-th canonical unit vector, having a 1 in its $j$-th component and zeros elsewhere. Then for $x = (x_j)_{j=1,\dots,n} \in \mathbb{C}^n$,
$$Ax = A\Big(\sum_{j=1}^n x_j e^j\Big) = \sum_{j=1}^n A(x_j e^j) = \sum_{j=1}^n x_j A e^j =: \sum_{j=1}^n x_j a^j.$$
Hence, the images $a^j := A e^j$ of the canonical basis vectors characterize the linear map $A$. We therefore identify $A$ with the $m \times n$ matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}.$$

Basic Ideas from Linear Algebra: Matrix-vector product

For $A = (a_{jk}) \in \mathbb{C}^{m \times n}$ and $x = (x_k) \in \mathbb{C}^n$ we have
$$Ax = \Big(\sum_{k=1}^n a_{jk} x_k\Big)_{j=1,\dots,m} =: b \in \mathbb{C}^m.$$
The vector $b$ is called the matrix-vector product of $A$ and $x$. For every $x \in \mathbb{C}^n$ the matrix-vector product $b = Ax$ is a linear combination of the columns $a^j$ of the matrix $A$.
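The column interpretation above lends itself to a quick numerical check. The following minimal sketch (not part of the original slides; it assumes NumPy is available and uses a made-up example matrix) verifies that $Ax$ equals the linear combination $\sum_k x_k a^k$ of the columns of $A$.

```python
import numpy as np

# A small complex matrix A in C^{3x2} and a vector x in C^2 (illustrative values).
A = np.array([[1.0 + 1j, 2.0],
              [0.0,      3.0 - 1j],
              [4.0,      5.0]])
x = np.array([2.0, 1j])

# b = Ax computed directly ...
b = A @ x

# ... and as the linear combination x_1*a^1 + x_2*a^2 of the columns of A.
b_columns = sum(x[k] * A[:, k] for k in range(A.shape[1]))

assert np.allclose(b, b_columns)
print(b)
```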
Basic Ideas from Linear Algebra: Matrix-matrix product

Let $A : \mathbb{C}^n \to \mathbb{C}^m$ and $B : \mathbb{C}^m \to \mathbb{C}^p$ be linear maps. Then the composition
$$B \circ A : \mathbb{C}^n \to \mathbb{C}^p, \qquad (B \circ A)x = B(Ax)$$
is linear as well, and componentwise
$$(B(Ax))_i = \sum_{j=1}^m b_{ij} \Big(\sum_{k=1}^n a_{jk} x_k\Big) = \sum_{k=1}^n \Big(\sum_{j=1}^m b_{ij} a_{jk}\Big) x_k, \qquad i = 1,\dots,p.$$
Hence, the composite map of $B$ and $A$ is represented by the matrix-matrix product $C := BA \in \mathbb{C}^{p \times n}$ with elements
$$c_{ik} = \sum_{j=1}^m b_{ij} a_{jk}, \qquad i = 1,\dots,p,\ k = 1,\dots,n.$$
Notice that the matrix-matrix product of $B$ and $A$ is only defined if the number of columns of $B$ equals the number of rows of $A$.

Basic Ideas from Linear Algebra: Range of a matrix

The range of a matrix $A$, written $\operatorname{range}(A)$, is the set of vectors that can be expressed as $Ax$ for some $x$. The formula $b = Ax$ leads naturally to the following characterization of $\operatorname{range}(A)$.

Theorem 1. $\operatorname{range}(A)$ is the space spanned by the columns of $A$.

Proof. $Ax = \sum_{j=1}^n a^j x_j$ is a linear combination of the columns $a^j$ of $A$. Conversely, any vector $y$ in the space spanned by the columns of $A$ can be written as a linear combination of the columns, $y = \sum_{j=1}^n a^j x_j$. Forming a vector $x$ out of the coefficients $x_j$, we have $y = Ax$, and thus $y$ is in the range of $A$.

In view of Theorem 1, the range of a matrix $A$ is also called the column space of $A$.

Basic Ideas from Linear Algebra: Nullspace of A

The nullspace of $A \in \mathbb{C}^{m \times n}$, written $\operatorname{null}(A)$, is the set of vectors $x$ that satisfy $Ax = 0$, where $0$ is the zero vector in $\mathbb{C}^m$. The entries of each vector $x \in \operatorname{null}(A)$ give the coefficients of an expansion of zero as a linear combination of columns of $A$:
$$0 = x_1 a^1 + x_2 a^2 + \dots + x_n a^n.$$

The column rank of a matrix is the dimension of its column space. Similarly, the row rank of a matrix is the dimension of the space spanned by its rows. Row rank always equals column rank (among other proofs, this is a corollary of the singular value decomposition, discussed later), so we refer to this number simply as the rank of a matrix.

Basic Ideas from Linear Algebra: Full rank

An $m \times n$ matrix of full rank is one that has the maximal possible rank (the lesser of $m$ and $n$). This means that a matrix of full rank with $m \ge n$ must have $n$ linearly independent columns. Such a matrix can also be characterized by the property that the map it defines is one-to-one.

Theorem 2. A matrix $A \in \mathbb{C}^{m \times n}$ with $m \ge n$ has full rank if and only if it maps no two distinct vectors to the same vector.

Proof. If $A$ is of full rank, its columns are linearly independent, so they form a basis for $\operatorname{range}(A)$. This means that every $b \in \operatorname{range}(A)$ has a unique linear expansion in terms of the columns of $A$, and therefore every $b \in \operatorname{range}(A)$ corresponds to a unique $x$ such that $b = Ax$. Conversely, if $A$ is not of full rank, its columns $a^j$ are linearly dependent, and there is a nontrivial linear combination such that $\sum_{j=1}^n c_j a^j = 0$. The nonzero vector $c$ formed from the coefficients $c_j$ satisfies $Ac = 0$. But then $A$ maps distinct vectors to the same vector since, for any $x$, it holds that $Ax = A(x + c)$.
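As an illustration of the converse direction of Theorem 2 (again a sketch assuming NumPy; the matrices are made-up examples, not from the slides), the code below builds a rank-deficient matrix, exhibits a nullspace vector $c$, and checks that $Ax = A(x + c)$, i.e. the map is not one-to-one.

```python
import numpy as np

# A 3x2 matrix whose second column is twice the first,
# so rank(A) = 1 < 2 and A is not of full rank.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
print(np.linalg.matrix_rank(A))  # 1

# A nullspace vector c (here c = (2, -1)^T, up to scaling): A c = 0.
c = np.array([2.0, -1.0])
assert np.allclose(A @ c, 0)

# Hence A maps distinct vectors to the same image: Ax = A(x + c).
x = np.array([1.0, 1.0])
assert np.allclose(A @ x, A @ (x + c))
```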
Basic Ideas from Linear Algebra: Inverse matrix

A nonsingular or invertible matrix is a square matrix of full rank. Note that the $m$ columns of a nonsingular $m \times m$ matrix $A$ form a basis for the whole space $\mathbb{C}^m$. Therefore, we can uniquely express any vector as a linear combination of them. In particular, the canonical unit vector $e^j$ can be expanded:
$$e^j = \sum_{i=1}^m z_{ij} a^i.$$
Let $Z \in \mathbb{C}^{m \times m}$ be the matrix with entries $z_{ij}$, and let $z^j$ denote the $j$-th column of $Z$. Then it holds that $e^j = A z^j$, and putting these vectors together,
$$AZ = (e^1,\dots,e^m) =: I,$$
where $I$ is the $m \times m$ identity. $Z$ is the inverse of $A$, and it is denoted by $Z =: A^{-1}$.

Basic Ideas from Linear Algebra: Gaussian elimination

The simplest way to solve a linear system (by hand or on a computer) is Gaussian elimination. It transforms a linear system into an equivalent one with an upper-triangular system matrix by applying simple linear transformations.

Let $A \in \mathbb{C}^{m \times n}$ be given. The idea is to transform $A$ into an upper-triangular matrix by introducing zeros below the diagonal, first in column 1, then in column 2, etc. This is done by subtracting suitable multiples of each row from the subsequent ones. This elimination process is equivalent to multiplying $A$ by a sequence of lower-triangular matrices $L_j$ on the left:
$$L_{n-1} L_{n-2} \cdots L_1 A = U.$$

Basic Ideas from Linear Algebra: LU factorization

Setting $L := L_1^{-1} L_2^{-1} \cdots L_{n-1}^{-1}$ gives $A = LU$. Thus we obtain an LU factorization of $A$,
$$A = LU,$$
where $U$ is upper-triangular, and $L$ is (as a product of lower-triangular matrices) lower-triangular.
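To make the elimination concrete, here is a minimal sketch of the factorization $A = LU$ without pivoting (assuming NumPy; the helper `lu_no_pivot` and the test matrix are illustrative choices, not part of the slides). It mirrors the construction above: the multipliers used to zero out the entries below the diagonal become the subdiagonal entries of $L = L_1^{-1} \cdots L_{n-1}^{-1}$.

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization A = LU by Gaussian elimination without pivoting.

    A minimal sketch for a square matrix whose pivots are nonzero
    (no row exchanges are performed)."""
    U = np.array(A, dtype=float)
    m = U.shape[0]
    L = np.eye(m)
    for j in range(m - 1):
        for i in range(j + 1, m):
            # Multiplier that eliminates U[i, j]; it becomes the (i, j) entry of L.
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)          # A = LU
assert np.allclose(U, np.triu(U))     # U is upper-triangular
assert np.allclose(L, np.tril(L))     # L is lower-triangular with unit diagonal
```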
Recommended publications
  • Relativistic Dynamics
  • More on Vectors Math 122 Calculus III D Joyce, Fall 2012
  • Lecture 3.Pdf
  • Linear Algebra Handbook
• Concept of a Dyad and Dyadic
  • Two Worked out Examples of Rotations Using Quaternions
  • Chapter 3 Vectors
  • 1 Vectors & Tensors
  • Section 13.1: Displacement Vectors
  • A Quick Study on Quaternions
  • 23. Eigenvalues and Eigenvectors
  • Chapter 2: Linear Algebra User's Manual