6. Linear Transformations

E-mail: [email protected] | http://web.yonsei.ac.kr/hgjung

6.1. Matrices as Transformations

A Review of Functions

A function assigns to each input x in its domain an output y in its codomain; y is called the image of x, and x is a preimage of y. The set of all images is the range of the function, a subset of the codomain. (See http://en.wikipedia.org/wiki/Codomain.)

A function whose inputs and outputs are vectors is called a transformation, and it is standard to denote transformations by capital letters such as F, T, or L. If w = T(x), we say "T maps x into w."

If T is a transformation whose domain is R^n and whose range is in R^m, then we will write T: R^n → R^m (read, "T maps R^n into R^m"). You can think of a transformation T as mapping points into points or vectors into vectors. If T: R^n → R^n, then we refer to the transformation T as an operator on R^n to emphasize that it maps R^n back into R^n.

Matrix Transformations

If A is an m×n matrix, and if x is a column vector in R^n, then the product Ax is a vector in R^m, so multiplying x by A creates a transformation that maps vectors in R^n into vectors in R^m. We call this transformation multiplication by A, or the transformation A, and denote it by T_A to emphasize the matrix A:

    T_A: R^n → R^m,  T_A(x) = Ax

In the special case where A is square, say n×n, we have T_A: R^n → R^n, and we call T_A a matrix operator on R^n.

Linear Transformations

The operational interpretation of linearity:

1. Homogeneity: Changing the input by a multiplicative factor changes the output by the same factor; that is, T(cx) = cT(x).
2. Additivity: Adding two inputs adds the corresponding outputs; that is, T(x + y) = T(x) + T(y).

More generally, if v_1, v_2, ..., v_k are vectors in R^n and c_1, c_2, ..., c_k are any scalars, then

    T(c_1 v_1 + c_2 v_2 + ... + c_k v_k) = c_1 T(v_1) + c_2 T(v_2) + ... + c_k T(v_k)

Engineers and physicists sometimes call this the superposition principle.

Example 7. From Theorem 3.1.5, if A is an m×n matrix, u and v are column vectors in R^n, and c is a scalar, then A(cu) = c(Au) and A(u + v) = Au + Av. Thus, the matrix transformation T_A: R^n → R^m is linear, since

    T_A(cu) = A(cu) = c(Au) = cT_A(u)
    T_A(u + v) = A(u + v) = Au + Av = T_A(u) + T_A(v)

Some Properties of Linear Transformations

If T: R^n → R^m is linear, then T(0) = 0, T(-x) = -T(x), and T(x - y) = T(x) - T(y); each of these follows directly from homogeneity and additivity.

All Linear Transformations from R^n to R^m Are Matrix Transformations

Every linear transformation T: R^n → R^m is a matrix transformation: if e_1, e_2, ..., e_n denote the standard unit vectors in R^n, then T(x) = Ax for all x in R^n, where A is the matrix whose successive columns are the images of the standard unit vectors, that is,

    A = [T(e_1) | T(e_2) | ... | T(e_n)]

The matrix A in this theorem is called the standard matrix for T, and we say that T is the transformation corresponding to A, or that T is the transformation represented by A, or sometimes simply that T is the transformation A.
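The linearity properties in Example 7 are easy to verify numerically. Below is a minimal NumPy sketch (not from the original slides); the matrix A and the test vectors are arbitrary choices made for illustration.

```python
import numpy as np

# A 2x3 matrix, so T_A : R^3 -> R^2 (illustrative values).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
c = 2.5

T = lambda x: A @ x                        # the matrix transformation T_A(x) = Ax

# Homogeneity: T(cu) = c T(u)
print(np.allclose(T(c * u), c * T(u)))     # True

# Additivity: T(u + v) = T(u) + T(v)
print(np.allclose(T(u + v), T(u) + T(v)))  # True

# Superposition: T(c1 u + c2 v) = c1 T(u) + c2 T(v)
c1, c2 = -1.0, 3.0
print(np.allclose(T(c1 * u + c2 * v), c1 * T(u) + c2 * T(v)))  # True
```

Floating-point arithmetic makes these identities hold only up to rounding, which is why the checks use np.allclose rather than exact equality.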
When it is desirable to emphasize the relationship between T and its standard matrix, we will denote A by [T]; that is, we will write

    [T] = [T(e_1) | T(e_2) | ... | T(e_n)]

With this notation, the relationship T(x) = Ax in (13) becomes

    T(x) = [T]x    (14)

REMARK. Theorem 6.1.4 shows that a linear transformation T: R^n → R^m is completely determined by its values at the standard unit vectors, in the sense that once the images of the standard unit vectors are known, the standard matrix [T] can be constructed and then used to compute the images of all other vectors using (14).

Example 11. Show that the transformation T: R^3 → R^2 defined by the given formula is linear, and find its standard matrix.

Rotations About the Origin

Let θ be a fixed angle, and consider the operator T that rotates each vector x in R^2 about the origin through the angle θ. T is linear. We will denote the standard matrix for the rotation about the origin through an angle θ by R_θ. Since the rotation carries e_1 = (1, 0) to (cos θ, sin θ) and e_2 = (0, 1) to (-sin θ, cos θ), Theorem 6.1.4 gives

    R_θ = [ cos θ   -sin θ ]
          [ sin θ    cos θ ]

Reflections About Lines Through the Origin

Let us consider the operator T: R^2 → R^2 that reflects each vector x about a line through the origin that makes an angle θ with the positive x-axis. Denoting its standard matrix by H_θ, the images of the standard unit vectors yield

    H_θ = [ cos 2θ    sin 2θ ]
          [ sin 2θ   -cos 2θ ]

Orthogonal Projections onto Lines Through the Origin

Consider the operator T: R^2 → R^2 that projects each vector x in R^2 onto a line through the origin by dropping a perpendicular to that line.

The standard matrix for an orthogonal projection onto a general line through the origin can be obtained using Theorem 6.1.4. Consider a line through the origin that makes an angle θ with the positive x-axis, and denote the standard matrix for the orthogonal projection by P_θ. Since the projection of x onto the line is the midpoint of x and its reflection, we have H_θ x = 2P_θ x - x. Solving for P_θ x yields P_θ x = (1/2)(x + H_θ x), so part (b) of Theorem 3.4.4 implies that

    P_θ = (1/2)(I + H_θ) = [ cos²θ         sin θ cos θ ]
                           [ sin θ cos θ   sin²θ       ]

Example 14. Find the orthogonal projection of the vector x = (1, 1) onto the line through the origin that makes an angle of π/12 (= 15°) with the x-axis.

    P_{π/12} x = [ (1/2)(1 + √3/2)    1/4              ] [ 1 ]   [ (3 + √3)/4 ]   [ 1.183 ]
                 [ 1/4                (1/2)(1 - √3/2)  ] [ 1 ] = [ (3 - √3)/4 ] ≈ [ 0.317 ]
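The three standard matrices above are short enough to code directly. Here is a minimal NumPy sketch (not part of the original slides) that builds R_θ, H_θ, and P_θ and reproduces the numbers in Example 14; the cross-check that P_θ equals (I + H_θ)/2 mirrors the midpoint derivation given above.

```python
import numpy as np

def R(theta):
    """Standard matrix for rotation about the origin through angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def H(theta):
    """Standard matrix for reflection about the line at angle theta."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c2, s2], [s2, -c2]])

def P(theta):
    """Standard matrix for orthogonal projection onto the line at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, s * c], [s * c, s * s]])

theta = np.pi / 12                    # 15 degrees, as in Example 14
x = np.array([1.0, 1.0])
print(P(theta) @ x)                   # [1.1830127 0.3169873] = ((3+√3)/4, (3-√3)/4)

# Sanity check: the projection matrix is the midpoint of I and the reflection.
print(np.allclose(P(theta), (np.eye(2) + H(theta)) / 2))   # True
```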
Transformations of the Unit Square

The unit square in R^2 is the square that has e_1 and e_2 as adjacent sides; its vertices are (0,0), (1,0), (1,1), and (0,1). It is often possible to gain some insight into the geometric behavior of a linear operator on R^2 by graphing the images of these vertices.

6.2. Geometry of Linear Operators

Norm-Preserving Linear Operators

A linear operator T: R^n → R^n with the length-preserving property ||T(x)|| = ||x|| is called an orthogonal operator or a linear isometry (from the Greek isometros, meaning "equal measure"). Length preservation and dot product preservation are equivalent.

(a) ⇒ (b): Suppose that T is length preserving, and let x and y be any two vectors in R^n. Using the identity x·y = (1/4)||x + y||² - (1/4)||x - y||² together with the linearity of T,

    T(x)·T(y) = (1/4)||T(x) + T(y)||² - (1/4)||T(x) - T(y)||²
              = (1/4)||T(x + y)||² - (1/4)||T(x - y)||²
              = (1/4)||x + y||² - (1/4)||x - y||²
              = x·y    (4)

(b) ⇒ (a): Conversely, suppose that T is dot product preserving, and let x be any vector in R^n. Since ||T(x)||² = T(x)·T(x) = x·x = ||x||², it follows that ||T(x)|| = ||x||.

Orthogonal Operators Preserve Angles and Orthogonality

Recall from the remark following Theorem 1.2.12 that the angle between two nonzero vectors x and y in R^n is given by the formula

    θ = cos⁻¹( x·y / (||x|| ||y||) )

Thus, if T: R^n → R^n is an orthogonal operator, the fact that T is length preserving and dot product preserving implies that

    cos⁻¹( T(x)·T(y) / (||T(x)|| ||T(y)||) ) = cos⁻¹( x·y / (||x|| ||y||) )

which implies that an orthogonal operator preserves angles.

Orthogonal Matrices

Our next goal is to explore the relationship between the orthogonality of an operator and properties of its standard matrix. Suppose that A is the standard matrix for an orthogonal linear operator T: R^n → R^n. Since T(x) = Ax for all x in R^n, and since ||T(x)|| = ||x||, it follows that ||Ax|| = ||x|| for all x in R^n. Squaring both sides,

    ||Ax||² = ||x||²
    (Ax)·(Ax) = x·x
    xᵀAᵀAx = xᵀx

for all x in R^n, which holds precisely when AᵀA = I, that is, when A⁻¹ = Aᵀ.
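The criterion AᵀA = I gives a direct numerical test for orthogonality. The sketch below (an illustration, not from the slides; the angle and vectors are arbitrary choices) contrasts a rotation, which is orthogonal, with a projection, which is not, and checks the length- and dot-product-preserving properties.

```python
import numpy as np

def is_orthogonal(A, tol=1e-12):
    """Check the matrix criterion A^T A = I for an orthogonal operator."""
    n = A.shape[0]
    return np.allclose(A.T @ A, np.eye(n), atol=tol)

theta = 0.7                               # an arbitrary angle for illustration
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])           # rotation: orthogonal
P = np.array([[c*c, s*c], [s*c, s*s]])    # projection: not orthogonal

print(is_orthogonal(R))                   # True  -> R^{-1} = R^T
print(is_orthogonal(P))                   # False -> projections collapse lengths

# Length preservation: ||Rx|| = ||x|| for any x.
x = np.array([3.0, -4.0])
print(np.linalg.norm(R @ x), np.linalg.norm(x))   # both 5.0 (up to rounding)

# Dot product (hence angle) preservation: (Rx)·(Ry) = x·y.
y = np.array([1.0, 2.0])
print(np.allclose((R @ x) @ (R @ y), x @ y))      # True
```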