Coordinates: Different Bases Determine Different Coordinates

Dr Scott Morrison (ANU), MATH1014 Notes, Second Semester 2015 (Lay §4.4, §4.7)

Overview

Last time we defined a basis of a vector space H:

Definition. The set {v_1, ..., v_p} is a basis for H if
- {v_1, ..., v_p} is linearly independent, and
- Span{v_1, ..., v_p} = H.

We recalled algorithms (§2.8, §4.3) for finding a basis for the null space and the column space of a matrix, and we stated the Unique Representation Theorem: given a basis for H, every vector in H can be written as a linear combination of basis vectors in a unique way. The coefficients of this expression are the coordinates of the vector with respect to the basis.

Question. Given bases B and C for H, how are [x]_B and [x]_C related?

Coordinates

Theorem (The Unique Representation Theorem). Suppose that B = {v_1, ..., v_n} is a basis for a vector space V. Then each x ∈ V has a unique expansion

    x = c_1 v_1 + ... + c_n v_n    (1)

where c_1, ..., c_n are in R. We say that the c_i are the coordinates of x relative to the basis B, and we write [x]_B = (c_1, ..., c_n)^T.

Coordinates give instructions for writing a given vector as a linear combination of basis vectors.

Different bases determine different coordinates...

Suppose B = {(1, 0)^T, (1, 2)^T}, and, as always, E = {(1, 0)^T, (0, 1)^T}.

[Figure: the same point x plotted twice, once on standard graph paper with axes along e_1 and e_2, and once on "B-graph paper" with axes along b_1 and b_2.]

If [x]_B = (2, 2)^T, then x = 2b_1 + 2b_2 = 2(1, 0)^T + 2(1, 2)^T = (4, 4)^T.

Similarly, [x]_E = (4, 4)^T, so x = 4e_1 + 4e_2 = 4(1, 0)^T + 4(0, 1)^T = (4, 4)^T.

...but some things stay the same

Even though we use different coordinates to describe the same point with respect to different bases, the structures we see in the vector space are independent of the chosen coordinates.

Definition. A one-to-one and onto linear transformation between vector spaces is an isomorphism. If there is an isomorphism T: V_1 → V_2, we say that V_1 and V_2 are isomorphic.

Informally, we say that the vector space V is isomorphic to W if every vector space calculation in V is accurately reproduced in W, and vice versa. For example, the property of a set of vectors being linearly independent doesn't depend on what coordinates they're written in.

Isomorphism

Theorem. Let B = {b_1, b_2, ..., b_n} be a basis for a vector space V. Then the coordinate mapping P: V → R^n defined by P(x) = [x]_B is an isomorphism.

What does this theorem mean? V and R^n are both vector spaces, and we're defining a specific map that takes vectors in V to vectors in R^n. This map
- is a linear transformation,
- is one-to-one (i.e., if P(u) = 0, then u = 0),
- is onto (for every v ∈ R^n, there's some u ∈ V with P(u) = v).

Every vector space with an n-element basis is isomorphic to R^n.

Very Important Consequences

If B = {b_1, ..., b_n} is a basis for a vector space V, then
- a set of vectors {u_1, ..., u_p} in V spans V if and only if the set of coordinate vectors {[u_1]_B, ..., [u_p]_B} spans R^n;
- a set of vectors {u_1, ..., u_p} in V is linearly independent in V if and only if the set of coordinate vectors {[u_1]_B, ..., [u_p]_B} is linearly independent in R^n;
- an indexed set of vectors {u_1, ..., u_p} in V is a basis for V if and only if the set of coordinate vectors {[u_1]_B, ..., [u_p]_B} is a basis for R^n.
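To make the graph-paper example concrete, here is a minimal numerical sketch in Python/NumPy (an illustration added here, not part of the original notes). It converts the B-coordinates (2, 2)^T of x to standard coordinates, and recovers [x]_B by solving a linear system.

```python
import numpy as np

# Illustrative check (not from the original notes).
# Basis B = {b1, b2} stored as the columns of a matrix.
B = np.column_stack([[1, 0],    # b1 = (1, 0)
                     [1, 2]])   # b2 = (1, 2)

x_B = np.array([2, 2])          # coordinates of x relative to B

# From B-coordinates to standard coordinates: x = 2*b1 + 2*b2.
x = B @ x_B
print(x)                        # [4 4], matching [x]_E = (4, 4)

# The other direction: solve B c = x to recover [x]_B.
c = np.linalg.solve(B, x)
print(c)                        # [2. 2.]
```

The same two operations (multiply by the basis matrix, or solve against it) implement the coordinate mapping and its inverse, which is exactly the isomorphism V → R^n described above, specialised to V = R^2.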
Theorem. If a vector space V has a basis B = {b_1, ..., b_n}, then any set in V containing more than n vectors is linearly dependent.

Theorem. If a vector space V has a basis consisting of n vectors, then every basis of V must consist of exactly n vectors. That is, every basis for V has the same number of elements. This number is called the dimension of V, and we'll study it more tomorrow.

Changing Coordinates (Lay §4.7)

When a basis B is chosen for V, the associated coordinate mapping onto R^n defines a coordinate system for V. Each x ∈ V is identified uniquely by its coordinate vector [x]_B.

In some applications, a problem is initially described by using a basis B, but by choosing a different basis C, the problem can be greatly simplified and easily solved. We want to study the relationship between [x]_B and [x]_C in R^n and the vector x in V. We'll try to solve this problem in two different ways.

Changing from B-coordinates to C-coordinates: Approach #1

Example 1. Let B = {b_1, b_2} and C = {c_1, c_2} be bases for a vector space V, and suppose that

    b_1 = -c_1 + 4c_2   and   b_2 = 5c_1 - 3c_2.    (2)

Further, suppose that [x]_B = (2, 3)^T for some vector x in V. What is [x]_C?

Let's try to solve this from the definitions of the objects. Since [x]_B = (2, 3)^T, we have

    x = 2b_1 + 3b_2.    (3)

The coordinate mapping determined by C is a linear transformation, so we can apply it to equation (3):

    [x]_C = [2b_1 + 3b_2]_C = 2[b_1]_C + 3[b_2]_C.

We can write this vector equation as a matrix equation:

    [x]_C = [ [b_1]_C  [b_2]_C ] (2, 3)^T.    (4)

Here the vector [b_i]_C becomes the i-th column of the matrix. This formula gives us [x]_C once we know the columns of the matrix. But from equation (2) we get

    [b_1]_C = (-1, 4)^T   and   [b_2]_C = (5, -3)^T.

So the solution is

    [x]_C = [ -1   5 ] [2]  =  [ 13 ]
            [  4  -3 ] [3]     [ -1 ]

or [x]_C = P_{C←B} [x]_B, where

    P_{C←B} = [ -1   5 ]
              [  4  -3 ]

is called the change of coordinate matrix from the basis B to C. Note that from equation (4), we have

    P_{C←B} = [ [b_1]_C  [b_2]_C ].

The argument used to derive the formula (4) can be generalised to give the following result.

Theorem 2. Let B = {b_1, ..., b_n} and C = {c_1, ..., c_n} be bases for a vector space V. Then there is a unique n × n matrix P_{C←B} such that

    [x]_C = P_{C←B} [x]_B.    (5)

The columns of P_{C←B} are the C-coordinate vectors of the vectors in the basis B. That is,

    P_{C←B} = [ [b_1]_C  [b_2]_C  ...  [b_n]_C ].    (6)

The matrix P_{C←B} in Theorem 2 is called the change of coordinate matrix from B to C. Multiplication by P_{C←B} converts B-coordinates into C-coordinates. Of course,

    [x]_B = P_{B←C} [x]_C,

so that

    [x]_B = P_{B←C} P_{C←B} [x]_B,

whence P_{B←C} and P_{C←B} are inverses of each other.

Summary of Approach #1

The columns of P_{C←B} are the C-coordinate vectors of the vectors in the basis B.

Why is this true, and what's a good way to remember it? Suppose B = {b_1, ..., b_n} and C = {c_1, ..., c_n} are bases for a vector space V. What is [b_1]_B? It is

    [b_1]_B = (1, 0, ..., 0)^T.
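As a sanity check on Example 1 and Theorem 2, the following NumPy sketch (an addition for illustration, not from the notes) builds P_{C←B} column by column from [b_1]_C and [b_2]_C, applies it to [x]_B, and confirms that P_{B←C} is its inverse.

```python
import numpy as np

# Illustrative check (not from the original notes).
# C-coordinate vectors of the B-basis vectors, from equation (2):
b1_C = np.array([-1, 4])    # b1 = -c1 + 4*c2
b2_C = np.array([5, -3])    # b2 =  5*c1 - 3*c2

# The change of coordinate matrix has these as its columns (equation (6)).
P_CB = np.column_stack([b1_C, b2_C])

x_B = np.array([2, 3])
x_C = P_CB @ x_B
print(x_C)                  # [13 -1], as computed in Example 1

# Converting C-coordinates back to B-coordinates uses the inverse matrix.
P_BC = np.linalg.inv(P_CB)
print(P_BC @ x_C)           # [2. 3.]  (recovers [x]_B)
```

The round trip P_{B←C} P_{C←B} [x]_B = [x]_B is exactly the inverse relationship stated above.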
We have [b_1]_C = P_{C←B} [b_1]_B, so the first column of P_{C←B} needs to be the vector for b_1 in C-coordinates.

Example

Example 2. Find the change of coordinates matrices P_{C←B} and P_{B←C} for the bases

    B = {1, x, x^2}   and   C = {1 + x, x + x^2, 1 + x^2}

of P_2. Notice that it's "easy" to write a vector in C in B-coordinates:

    [1 + x]_B = (1, 1, 0)^T,   [x + x^2]_B = (0, 1, 1)^T,   [1 + x^2]_B = (1, 0, 1)^T.

Thus,

    P_{B←C} = [ 1 0 1 ]
              [ 1 1 0 ]
              [ 0 1 1 ].

Example 2 (continued). Since we just showed

    P_{B←C} = [ 1 0 1 ]
              [ 1 1 0 ]
              [ 0 1 1 ],

we have

    P_{C←B} = P_{B←C}^{-1} = [  1/2   1/2  -1/2 ]
                             [ -1/2   1/2   1/2 ]
                             [  1/2  -1/2   1/2 ].

Suppose now that we have a polynomial p(x) = 1 + 2x - 3x^2 and we want to find its coordinates relative to the basis C.
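The notes break off here, but the computation they set up is mechanical. The sketch below (an illustration added here, not from the original) finishes it with NumPy, representing each polynomial in P_2 by its B-coordinate vector of coefficients in the order (constant, x, x^2).

```python
import numpy as np

# Illustrative completion of the example (not from the original notes).
# B = {1, x, x^2}; C = {1+x, x+x^2, 1+x^2}, written in B-coordinates.
P_BC = np.column_stack([[1, 1, 0],    # [1 + x]_B
                        [0, 1, 1],    # [x + x^2]_B
                        [1, 0, 1]])   # [1 + x^2]_B

# The change of coordinates from B to C is the inverse, as derived above.
P_CB = np.linalg.inv(P_BC)

# p(x) = 1 + 2x - 3x^2 has B-coordinates (1, 2, -3).
p_B = np.array([1, 2, -3])
p_C = P_CB @ p_B
print(p_C)          # [ 3. -1. -2.]

# Check: converting back to B-coordinates recovers the coefficients of p.
print(P_BC @ p_C)   # [ 1.  2. -3.]
```

So [p]_C = (3, -1, -2)^T, and indeed expanding 3(1 + x) - (x + x^2) - 2(1 + x^2) gives 1 + 2x - 3x^2.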