Linear Algebra (part 2): Vector Spaces and Linear Transformations (by Evan Dummit, 2016, v. 2.35)

Contents

3 Vector Spaces and Linear Transformations
   3.1 Review of Vectors in $\mathbb{R}^n$
   3.2 The Formal Definition of a Vector Space
   3.3 Subspaces
      3.3.1 The Subspace Criterion
      3.3.2 Additional Examples of Subspaces
   3.4 Span, Linear Independence, Bases, Dimension
      3.4.1 Linear Combinations and Span
      3.4.2 Linear Independence and Linear Dependence
      3.4.3 Bases and Dimension
      3.4.4 Finding Bases for $\mathbb{R}^n$, Row Spaces, Column Spaces, and Nullspaces
   3.5 Linear Transformations
      3.5.1 Definition and Examples
      3.5.2 Kernel and Image
      3.5.3 Isomorphisms of Vector Spaces, Matrices Associated to Linear Transformations

3 Vector Spaces and Linear Transformations

In this chapter we will introduce the notion of an abstract vector space, which is, ultimately, a generalization of the ideas inherent in studying vectors in 2- or 3-dimensional space. We will study vector spaces from an axiomatic perspective, discuss the notions of span and linear independence, and ultimately explain why every vector space possesses a linearly independent spanning set called a basis. We will then discuss linear transformations, which are the most natural kind of map from one vector space to another, and show how they are intimately related with matrices. In our discussions we will give concrete examples as often as possible, and use the general properties we have shown about vector spaces to motivate results relevant to solving systems of linear equations and solving differential equations.

3.1 Review of Vectors in $\mathbb{R}^n$

• A vector, as we typically think of it, is a quantity which has both a magnitude and a direction. This is in contrast to a scalar, which carries only a magnitude.
   ◦ Real-valued vectors are extremely useful in just about every aspect of the physical sciences, since just about everything in Newtonian physics is a vector: position, velocity, acceleration, forces, etc. There is also vector calculus (namely, calculus in the context of vector fields), which is typically part of a multivariable calculus course; it has many applications to physics as well.
• We often think of vectors geometrically, as a directed line segment (having a starting point and an endpoint).
• Algebraically, we write a vector as an ordered tuple of coordinates: we denote the $n$-dimensional vector from the origin to the point $(a_1, a_2, \ldots, a_n)$ as $\mathbf{v} = \langle a_1, a_2, \ldots, a_n \rangle$, where the $a_i$ are real-number scalars.
   ◦ Some vectors: $\langle 1, 2 \rangle$, $\langle 3, 5, -1 \rangle$, $\langle -\pi, e^2, 27, 3, \frac{4}{3}, 0, 0, -1 \rangle$.
   ◦ Notation: We use angle brackets $\langle \cdot \rangle$ rather than parentheses $(\cdot)$ so as to draw a visual distinction between a vector and the coordinates of a point in space. We also draw arrows above vectors (as $\vec{v}$) or typeset them in boldface (as $\mathbf{v}$) in order to set them apart from scalars. Boldface is hard to produce without a computer, so it is highly recommended to use the arrow notation $\vec{v}$ when writing by hand.
   ◦ Note: Vectors are a little bit different from directed line segments, because we don't care where a vector starts: we only care about the difference between the starting and ending positions. Thus, the directed segment whose start is $(0,0)$ and end is $(1,1)$, and the segment starting at $(1,1)$ and ending at $(2,2)$, represent the same vector $\langle 1, 1 \rangle$.
• We can add vectors (provided they are of the same length!) in the obvious way, one component at a time: if $\mathbf{v} = \langle a_1, \ldots, a_n \rangle$ and $\mathbf{w} = \langle b_1, \ldots, b_n \rangle$ then $\mathbf{v} + \mathbf{w} = \langle a_1 + b_1, \ldots, a_n + b_n \rangle$.
   ◦ We can justify this using our geometric idea of what a vector does: $\mathbf{v}$ moves us from the origin to the point $(a_1, \ldots, a_n)$. Then $\mathbf{w}$ tells us to add $\langle b_1, \ldots, b_n \rangle$ to the coordinates of our current position, and so $\mathbf{w}$ moves us from $(a_1, \ldots, a_n)$ to $(a_1 + b_1, \ldots, a_n + b_n)$. So the net result is that the sum vector $\mathbf{v} + \mathbf{w}$ moves us from the origin to $(a_1 + b_1, \ldots, a_n + b_n)$, meaning that it is just the vector $\langle a_1 + b_1, \ldots, a_n + b_n \rangle$.
   ◦ Geometrically, we can think of vector addition using a parallelogram whose pairs of parallel sides are $\mathbf{v}$ and $\mathbf{w}$ and whose diagonal is $\mathbf{v} + \mathbf{w}$.
• We can also "scale" a vector by a scalar, one component at a time: if $r$ is a scalar, then we have $r\mathbf{v} = \langle ra_1, \ldots, ra_n \rangle$.
   ◦ Again, we can justify this by our geometric idea of what a vector does: if $\mathbf{v}$ moves us some amount in a direction, then $\frac{1}{2}\mathbf{v}$ should move us half as far in that direction. Analogously, $2\mathbf{v}$ should move us twice as far in that direction, while $-\mathbf{v}$ should move us exactly as far, but in the opposite direction.
• Example: If $\mathbf{v} = \langle -1, 2, 2 \rangle$ and $\mathbf{w} = \langle 3, 0, -4 \rangle$ then $2\mathbf{w} = \langle 6, 0, -8 \rangle$, and $\mathbf{v} + \mathbf{w} = \langle 2, 2, -2 \rangle$. Furthermore, $\mathbf{v} - 2\mathbf{w} = \langle -7, 2, 10 \rangle$.
• The arithmetic of vectors in $\mathbb{R}^n$ satisfies several algebraic properties that follow more or less directly from the definition (a quick computational check of these is sketched below):
   ◦ Addition of vectors is commutative and associative.
   ◦ There is a zero vector (namely, the vector with all entries zero), and every vector has an additive inverse.
   ◦ Scalar multiplication distributes over addition of both vectors and scalars.
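Componentwise addition and scaling are easy to experiment with on a computer. The following is a minimal sketch (in Python with NumPy; the code and variable names are our own illustration, not part of the original notes) reproducing the worked example above and spot-checking the listed properties:

```python
import numpy as np

# The vectors from the worked example above.
v = np.array([-1, 2, 2])
w = np.array([3, 0, -4])

# Componentwise scaling and addition, exactly as defined in this section.
print(2 * w)       # [ 6  0 -8]
print(v + w)       # [ 2  2 -2]
print(v - 2 * w)   # [-7  2 10]

# Spot-check the algebraic properties listed above.
zero = np.zeros(3)
assert np.array_equal(v + w, w + v)                # addition is commutative
assert np.array_equal(v + zero, v)                 # zero vector
assert np.array_equal(3 * (v + w), 3*v + 3*w)      # scalars distribute over vector addition
```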
3.2 The Formal Definition of a Vector Space

• The two operations of addition and scalar multiplication (and the various algebraic properties they satisfy) are the key properties of vectors in $\mathbb{R}^n$. We would like to investigate other collections of things which possess those same properties.
• Definition: A (real) vector space is a collection $V$ of vectors together with two binary operations, addition of vectors ($+$) and scalar multiplication of a vector by a real number ($\cdot$), satisfying the following axioms (a numerical spot-check of them for $\mathbb{R}^2$ is sketched below):
   ◦ Let $\mathbf{v}$, $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$ be any vectors in $V$ and $\alpha$, $\alpha_1$, $\alpha_2$ be any (real number) scalars.
   ◦ Note: The statement that $+$ and $\cdot$ are binary operations means that $\mathbf{v}_1 + \mathbf{v}_2$ and $\alpha \cdot \mathbf{v}$ are always defined; that is, they are both vectors in $V$.
   ◦ [A1] Addition is commutative: $\mathbf{v}_1 + \mathbf{v}_2 = \mathbf{v}_2 + \mathbf{v}_1$.
   ◦ [A2] Addition is associative: $(\mathbf{v}_1 + \mathbf{v}_2) + \mathbf{v}_3 = \mathbf{v}_1 + (\mathbf{v}_2 + \mathbf{v}_3)$.
   ◦ [A3] There exists a zero vector $\mathbf{0}$, with $\mathbf{v} + \mathbf{0} = \mathbf{v}$.
   ◦ [A4] Every vector $\mathbf{v}$ has an additive inverse $-\mathbf{v}$, with $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$.
   ◦ [M1] Scalar multiplication is consistent with regular multiplication: $\alpha_1 \cdot (\alpha_2 \cdot \mathbf{v}) = (\alpha_1 \alpha_2) \cdot \mathbf{v}$.
   ◦ [M2] Addition of scalars distributes: $(\alpha_1 + \alpha_2) \cdot \mathbf{v} = \alpha_1 \cdot \mathbf{v} + \alpha_2 \cdot \mathbf{v}$.
   ◦ [M3] Addition of vectors distributes: $\alpha \cdot (\mathbf{v}_1 + \mathbf{v}_2) = \alpha \cdot \mathbf{v}_1 + \alpha \cdot \mathbf{v}_2$.
   ◦ [M4] The scalar $1$ acts like the identity on vectors: $1 \cdot \mathbf{v} = \mathbf{v}$.
• Remark: One may also consider vector spaces where the collection of scalars is something other than the real numbers: for example, there exists an equally important notion of a complex vector space, whose scalars are the complex numbers. (The axioms are the same, except we allow the scalars to be complex numbers.)
   ◦ We will primarily work with real vector spaces, in which the scalars are the real numbers.
   ◦ The most general notion of a vector space involves scalars from a field, which is a collection of numbers which possess addition and multiplication operations which are commutative, associative, and distributive, with an additive identity 0 and multiplicative identity 1, such that every element has an additive inverse and every nonzero element has a multiplicative inverse.
   ◦ Aside from the real and complex numbers, another example of a field is the rational numbers (fractions).
   ◦ One can formulate an equally interesting theory of vector spaces over any field.
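Before turning to the examples below, here is a small sketch (again Python with NumPy, our own illustration; the function name check_axioms is hypothetical) that spot-checks the eight axioms for $\mathbb{R}^2$ on randomly chosen vectors and scalars. Passing such a check does not prove the axioms hold, of course; the first example below works them out algebraically.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_axioms(trials=100):
    """Numerically spot-check the vector space axioms [A1]-[M4] for R^2."""
    for _ in range(trials):
        v1, v2, v3 = (rng.standard_normal(2) for _ in range(3))
        a1, a2 = rng.standard_normal(2)
        zero = np.zeros(2)
        assert np.allclose(v1 + v2, v2 + v1)                    # [A1] commutativity
        assert np.allclose((v1 + v2) + v3, v1 + (v2 + v3))      # [A2] associativity
        assert np.allclose(v1 + zero, v1)                       # [A3] zero vector
        assert np.allclose(v1 + (-v1), zero)                    # [A4] additive inverse
        assert np.allclose(a1 * (a2 * v1), (a1 * a2) * v1)      # [M1] scalar compatibility
        assert np.allclose((a1 + a2) * v1, a1 * v1 + a2 * v1)   # [M2] scalar distributivity
        assert np.allclose(a1 * (v1 + v2), a1 * v1 + a1 * v2)   # [M3] vector distributivity
        assert np.allclose(1 * v1, v1)                          # [M4] identity scalar
    print("all axioms passed the spot-check")

check_axioms()
```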
• Here are some examples of vector spaces:
• Example: The vectors in $\mathbb{R}^n$ are a vector space, for any $n > 0$. (This had better be true!)
   ◦ For simplicity we will demonstrate all of the axioms for vectors in $\mathbb{R}^2$; there, the vectors are of the form $\langle x, y \rangle$ and scalar multiplication is defined as $\alpha \cdot \langle x, y \rangle = \langle \alpha x, \alpha y \rangle$.
   ◦ [A1]: We have $\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle = \langle x_1 + x_2, y_1 + y_2 \rangle = \langle x_2, y_2 \rangle + \langle x_1, y_1 \rangle$.
   ◦ [A2]: We have $(\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle) + \langle x_3, y_3 \rangle = \langle x_1 + x_2 + x_3, y_1 + y_2 + y_3 \rangle = \langle x_1, y_1 \rangle + (\langle x_2, y_2 \rangle + \langle x_3, y_3 \rangle)$.
   ◦ [A3]: The zero vector is $\langle 0, 0 \rangle$, and clearly $\langle x, y \rangle + \langle 0, 0 \rangle = \langle x, y \rangle$.
   ◦ [A4]: The additive inverse of $\langle x, y \rangle$ is $\langle -x, -y \rangle$, since $\langle x, y \rangle + \langle -x, -y \rangle = \langle 0, 0 \rangle$.
   ◦ [M1]: We have $\alpha_1 \cdot (\alpha_2 \cdot \langle x, y \rangle) = \langle \alpha_1 \alpha_2 x, \alpha_1 \alpha_2 y \rangle = (\alpha_1 \alpha_2) \cdot \langle x, y \rangle$.
   ◦ [M2]: We have $(\alpha_1 + \alpha_2) \cdot \langle x, y \rangle = \langle (\alpha_1 + \alpha_2)x, (\alpha_1 + \alpha_2)y \rangle = \alpha_1 \cdot \langle x, y \rangle + \alpha_2 \cdot \langle x, y \rangle$.
   ◦ [M3]: We have $\alpha \cdot (\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle) = \langle \alpha(x_1 + x_2), \alpha(y_1 + y_2) \rangle = \alpha \cdot \langle x_1, y_1 \rangle + \alpha \cdot \langle x_2, y_2 \rangle$.
   ◦ [M4]: Finally, we have $1 \cdot \langle x, y \rangle = \langle x, y \rangle$.
• Example: The set of $m \times n$ matrices, for any $m$ and any $n$, forms a vector space.
   ◦ The various algebraic properties we know about matrix addition give [A1] and [A2] along with [M1], [M2], [M3], and [M4].
   ◦ The zero vector in this vector space is the zero matrix (with all entries zero), and [A3] and [A4] follow easily.
   ◦ Note of course that in some cases we can also multiply matrices by other matrices. However, the requirements for being a vector space don't care that we can multiply matrices by other matrices! (All we need to be able to do is add them and multiply them by scalars, as illustrated in the sketch below.)
• Example: The complex numbers (the numbers of the form $a + bi$ for real $a$ and $b$, and where $i^2 = -1$) are a vector space.
   ◦ The axioms all follow from the standard properties of complex numbers.
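The matrix and complex-number examples can be illustrated the same way, since a vector space structure only ever asks for addition and scaling. A short sketch (Python, our own illustration; the specific matrices and numbers are made up for the demonstration):

```python
import numpy as np

# 2 x 3 matrices form a vector space: we only ever add them and scale them.
A = np.array([[1.0,  2.0, 0.0],
              [0.0, -1.0, 3.0]])
B = np.array([[2.0,  0.0, 1.0],
              [4.0,  1.0, -2.0]])
print(A + B)     # matrix addition, entry by entry
print(2.5 * A)   # scalar multiplication, entry by entry
# (That matrices can sometimes also be multiplied together is extra
# structure the vector space axioms do not use.)

# Complex numbers form a real vector space: add them, and scale by reals.
z = complex(-1, 2)   # -1 + 2i
w = complex(3, -4)   #  3 - 4i
print(z + w)         # (2-2j)
print(2 * z)         # (-2+4j)
```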