Math 480 Notes on Orthogonality

The word orthogonal is a synonym for perpendicular.

Question 1: When are two vectors $\vec{v}_1$ and $\vec{v}_2$ in $\mathbb{R}^n$ orthogonal to one another?

The most basic answer is "if the angle between them is $90^\circ$," but this is not very practical. How could you tell whether the vectors
$$\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}$$
are at $90^\circ$ from one another?

One way to think about this is as follows: $\vec{v}_1$ and $\vec{v}_2$ are orthogonal if and only if the triangle formed by $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_1 - \vec{v}_2$ (drawn with its tail at $\vec{v}_2$ and its head at $\vec{v}_1$) is a right triangle. The Pythagorean Theorem then tells us that this triangle is a right triangle if and only if
$$\|\vec{v}_1\|^2 + \|\vec{v}_2\|^2 = \|\vec{v}_1 - \vec{v}_2\|^2, \tag{1}$$
where $\|\cdot\|$ denotes the length of a vector.

The length of a vector $\vec{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ is easy to measure: the Pythagorean Theorem (once again) tells us that
$$\|\vec{x}\| = \sqrt{x_1^2 + \cdots + x_n^2}.$$
This expression under the square root is simply the matrix product
$$\vec{x}^T \vec{x} = (x_1 \ \cdots \ x_n) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$

Definition. The inner product (also called the dot product) of two vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$, written $\langle \vec{x}, \vec{y}\rangle$ or $\vec{x} \cdot \vec{y}$, is defined by
$$\langle \vec{x}, \vec{y}\rangle = \vec{x}^T \vec{y} = \sum_{i=1}^n x_i y_i.$$

Since matrix multiplication is linear, inner products satisfy
$$\langle \vec{x}, \vec{y}_1 + \vec{y}_2\rangle = \langle \vec{x}, \vec{y}_1\rangle + \langle \vec{x}, \vec{y}_2\rangle, \qquad \langle \vec{x}_1, a\vec{y}\rangle = a\langle \vec{x}_1, \vec{y}\rangle.$$
(Similar formulas hold in the first coordinate, since $\langle \vec{x}, \vec{y}\rangle = \langle \vec{y}, \vec{x}\rangle$.)

Now we can write
$$\|\vec{v}_1 - \vec{v}_2\|^2 = \langle \vec{v}_1 - \vec{v}_2, \vec{v}_1 - \vec{v}_2\rangle = \langle \vec{v}_1, \vec{v}_1\rangle - 2\langle \vec{v}_1, \vec{v}_2\rangle + \langle \vec{v}_2, \vec{v}_2\rangle = \|\vec{v}_1\|^2 - 2\langle \vec{v}_1, \vec{v}_2\rangle + \|\vec{v}_2\|^2,$$
so Equation (1) holds if and only if $\langle \vec{v}_1, \vec{v}_2\rangle = 0$.

Answer to Question 1: Vectors $\vec{v}_1$ and $\vec{v}_2$ in $\mathbb{R}^n$ are orthogonal if and only if $\langle \vec{v}_1, \vec{v}_2\rangle = 0$.

Exercise 1: Which of the following pairs of vectors are orthogonal to one another? Draw pictures to check your answers.

i) $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $\begin{pmatrix} -2 \\ 1 \end{pmatrix}$

ii) $\begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}$, $\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}$

iii) $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}$

Exercise 2: Find two orthogonal vectors in $\mathbb{R}^6$ all of whose coordinates are non-zero.
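As a quick illustration of this criterion (a sketch added here, not part of the original notes), the short NumPy snippet below computes $\vec{x}^T\vec{y}$ for the pair of vectors from the question above and for the pair in Exercise 1(i); part (i) is also easy to confirm with a picture.

```python
import numpy as np

# Sketch: check orthogonality by computing the inner product <x, y> = x^T y
# and testing whether it is zero.
def is_orthogonal(x, y):
    return np.dot(x, y) == 0

# The pair of vectors from the question above.
v1 = np.array([1, 1, 1])
v2 = np.array([1, 3, 1])
print(np.dot(v1, v2), is_orthogonal(v1, v2))   # 5  False -> not at 90 degrees

# The two-dimensional pair from Exercise 1(i).
u1 = np.array([1, 2])
u2 = np.array([-2, 1])
print(np.dot(u1, u2), is_orthogonal(u1, u2))   # 0  True -> orthogonal
```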
Definition. Given a subspace $S \subset \mathbb{R}^n$, the orthogonal complement of $S$, written $S^\perp$, is the subspace consisting of all vectors $\vec{v} \in \mathbb{R}^n$ that are orthogonal to every $\vec{s} \in S$.

Theorem 1. If $S$ is a subspace of $\mathbb{R}^n$ and $\dim(S) = k$, then $\dim(S^\perp) = n - k$.

The basic idea here is that every vector in $\mathbb{R}^n$ can be built up from vectors in $S$ and vectors in $S^\perp$, and these subspaces do not overlap. Think about the case of $\mathbb{R}^3$: the orthogonal complement of a line (a 1-dimensional subspace) is a plane (a 2-dimensional subspace) and vice versa.

Key Example: Given an $m \times n$ matrix $A \in \mathbb{R}^{m \times n}$, the orthogonal complement of the row space $\mathrm{Row}(A)$ is precisely $N(A)$. Why is this? The definition of matrix multiplication shows that being in the nullspace of $A$ is exactly the same as being orthogonal to every row of $A$: recall that if
$$A = \begin{bmatrix} \vec{a}_1 \\ \vec{a}_2 \\ \vdots \\ \vec{a}_m \end{bmatrix}$$
and $\vec{x} \in \mathbb{R}^n$, then the product $A\vec{x}$ is given by
$$A\vec{x} = \begin{bmatrix} \vec{a}_1 \cdot \vec{x} \\ \vec{a}_2 \cdot \vec{x} \\ \vdots \\ \vec{a}_m \cdot \vec{x} \end{bmatrix}.$$

Now, notice how the theorem fits with what we know about the fundamental subspaces: if $\dim N(A) = k$, then there are $k$ free variables and $n - k$ pivot variables in the system $A\vec{x} = \vec{0}$. Hence $\dim \mathrm{Row}(A) = n - k$. So the dimensions of $N(A)$ and its orthogonal complement $\mathrm{Row}(A)$ add to $n$, as claimed by the Theorem.

This argument actually proves the Theorem in general: every subspace $S$ in $\mathbb{R}^n$ has a basis $\vec{s}_1, \vec{s}_2, \ldots, \vec{s}_m$ (for some $m \le n$), and $S$ is then equal to the row space of the matrix
$$A = \begin{bmatrix} \vec{s}_1^{\,T} \\ \vec{s}_2^{\,T} \\ \vdots \\ \vec{s}_m^{\,T} \end{bmatrix}.$$
The statement that $S$ contains a finite basis deserves some explanation, and will be considered in detail below.

Another Key Example: Given an $m \times n$ matrix $A \in \mathbb{R}^{m \times n}$, the orthogonal complement of the column space $\mathrm{Col}(A)$ is precisely $N(A^T)$. This follows by the same sort of argument as for the first Key Example.

The following theorem should seem geometrically obvious, but it is annoyingly difficult to prove directly.

Theorem 2. If $V$ and $W$ are subspaces of $\mathbb{R}^n$ and $V = W^\perp$, then $W = V^\perp$ as well.

Proof. One half of this statement really is easy: if $V$ is the orthogonal complement of $W$, this means $V$ consists of all vectors $\vec{v} \in \mathbb{R}^n$ such that $\vec{v} \cdot \vec{w} = 0$ for all $\vec{w} \in W$. Now if $\vec{w} \in W$, then $\vec{w}$ is definitely perpendicular to every $\vec{v} \in V$ (i.e. $\vec{v} \cdot \vec{w} = 0$), and hence $W \subset V^\perp$. But why must every vector that is orthogonal to all of $V$ actually lie in $W$? We can prove this using what we know about dimensions. Say $V$ is $k$-dimensional. Then the dimension of $V^\perp$ is $n - k$ by Theorem 1. But Theorem 1 also tells us that the dimension of $W$ is $n - k$ (because $V = W^\perp$). So $W$ is an $(n-k)$-dimensional subspace of the $(n-k)$-dimensional space $V^\perp$, and from Section 5 of the Notes on Linear Independence, Bases, and Dimension, we know that $W$ must in fact be all of $V^\perp$.

Corollary. For any matrix $A$, $\mathrm{Col}(A) = N(A^T)^\perp$.

Note that this statement has a nice implication for linear systems: the column space $\mathrm{Col}(A)$ consists of all vectors $\vec{b}$ such that $A\vec{x} = \vec{b}$ has a solution. If you want to check whether $A\vec{x} = \vec{b}$ has a solution, you can now just check whether or not $\vec{b}$ is perpendicular to all vectors in $N(A^T)$. Sometimes this is easy to check, for instance if you have a basis for $N(A^T)$. (Note that if a vector $\vec{w}$ is perpendicular to each vector in a basis for some subspace $V$, then $\vec{w}$ is in fact perpendicular to all linear combinations of these basis vectors, so $\vec{w} \in V^\perp$.)

This raises the question: how can we find a basis for $N(A^T)$? Row reduction gives rise to an equation
$$EA = R,$$
where $R$ is the reduced echelon form of $A$ and $E$ is a product of elementary matrices and permutation matrices (corresponding to the row operations performed on $A$). Say $R$ has $k$ rows of zeros. Note that the dimension of $N(A^T)$ is precisely the number of rows of zeros in $R$ (why?), so we are looking for an independent set of vectors in $N(A^T)$ of size $k$. If you look at the last $k$ rows in the matrix equation $EA = R$, you'll see that this equation says that the last $k$ rows of $E$ lie in the left-hand nullspace $N(A^T)$. Moreover, these vectors are independent, because $E$ is a product of invertible matrices, hence invertible (so its rows are independent).
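The recipe in the preceding paragraph is easy to carry out by machine. The sketch below is an added illustration (not from the original notes): it uses SymPy and a small made-up matrix $A$ whose third row is the sum of the first two, row-reduces the augmented matrix $[A \mid I]$ so that the right-hand block records an invertible $E$ with $EA = R$, and reads off the last $k$ rows of $E$ as a basis for $N(A^T)$.

```python
import sympy as sp

# A small hypothetical matrix (not one from the notes); its third row is the
# sum of the first two, so the left nullspace N(A^T) is nontrivial.
A = sp.Matrix([[1, 2, 0],
               [0, 1, 1],
               [1, 3, 1]])
m, n = A.shape

# Row-reduce the augmented matrix [A | I].  The row operations that carry A to
# its reduced echelon form R are applied to the right-hand block as well, so
# the result has the form [R | E] with E invertible and E*A = R.
aug = sp.Matrix.hstack(A, sp.eye(m)).rref()[0]
R, E = aug[:, :n], aug[:, n:]
assert E * A == R

# k = number of zero rows of R = m - rank(A).
k = m - A.rank()

# Each of the last k rows e of E satisfies e*A = 0, i.e. it lies in N(A^T);
# being rows of an invertible matrix, they are independent, hence a basis.
basis = [E[i, :] for i in range(m - k, m)]
for e in basis:
    assert e * A == sp.zeros(1, n)

print(k, basis)
```

For this particular $A$ the output has $k = 1$, and the single basis vector $(1, 1, -1)$ records the dependence $\text{row}_1 + \text{row}_2 - \text{row}_3 = 0$.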
We now consider in detail the question of why every subspace of $\mathbb{R}^n$ has a basis.

Theorem 3. If $S$ is a subspace of $\mathbb{R}^n$, then $S$ has a basis containing at most $n$ elements. Equivalently, $\dim(S) \le n$.

Proof. First, recall that every set of $n + 1$ (or more) vectors in $\mathbb{R}^n$ is linearly dependent, since they form the columns of a matrix with more columns than rows. So every sufficiently large set of vectors in $S$ is dependent. Let $k$ be the smallest number between $1$ and $n + 1$ such that every set of $k$ vectors in $S$ is dependent.

If $k = 1$, then every vector $\vec{s} \in S$ forms a dependent set (all by itself), so $S$ must contain only the zero vector. In this case, the zero vector forms a spanning set for $S$. So we'll assume $k > 1$. Then there is a set $\vec{s}_1, \ldots, \vec{s}_{k-1}$ of vectors in $S$ which is linearly independent, and every larger set in $S$ is dependent. We claim that this set actually spans $S$ (and hence is a basis for $S$).

The proof will be by contradiction, meaning that we'll consider what would happen if this set did not span $S$, and we'll see that this would lead to a contradiction. If this set did not span $S$, then there would be a vector $\vec{s} \in S$ that is not a linear combination of the vectors $\vec{s}_1, \ldots, \vec{s}_{k-1}$. We claim that this makes the set $\vec{s}_1, \ldots, \vec{s}_{k-1}, \vec{s}$ linearly independent. Say
$$c_1 \vec{s}_1 + \cdots + c_{k-1} \vec{s}_{k-1} + c_k \vec{s} = \vec{0}. \tag{2}$$
We will prove that all the scalars $c_i$ must be zero. If $c_k$ were non-zero, then we could solve the above equation for $\vec{s}$, yielding
$$\vec{s} = -\frac{c_1}{c_k}\, \vec{s}_1 - \cdots - \frac{c_{k-1}}{c_k}\, \vec{s}_{k-1}.$$
But that's impossible, since $\vec{s}$ is not a linear combination of the vectors $\vec{s}_1, \ldots, \vec{s}_{k-1}$! So $c_k$ is zero, and the equation (2) becomes
$$c_1 \vec{s}_1 + \cdots + c_{k-1} \vec{s}_{k-1} = \vec{0}.$$
Since $\vec{s}_1, \ldots, \vec{s}_{k-1}$ is independent, the rest of the $c_i$ must be zero as well. We have now shown that $\vec{s}_1, \ldots, \vec{s}_{k-1}, \vec{s}$ is a linearly independent set in $S$, but this contradicts the assumption that all sets of size $k$ in $S$ are dependent.
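The argument in this proof is essentially constructive: as long as the current independent set fails to span $S$, you can enlarge it, and since no independent set in $\mathbb{R}^n$ has more than $n$ vectors the process must stop. The NumPy sketch below (added as an illustration, not part of the original notes; the function name `greedy_basis` and the example vectors are made up) mirrors that extension step, using a rank computation to decide whether a vector lies in the span of the current set.

```python
import numpy as np

def greedy_basis(vectors, tol=1e-10):
    """Extract a basis for span(vectors) by the extension argument in the proof."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        # v enlarges the independent set exactly when the rank goes up, i.e.
        # when v is not a linear combination of the vectors kept so far.
        if np.linalg.matrix_rank(np.column_stack(candidate), tol=tol) == len(candidate):
            basis.append(v)
    return basis

# Example: four vectors in R^3 can never be independent, so at most 3 survive.
vecs = [np.array([1., 1., 1.]), np.array([1., 3., 1.]),
        np.array([2., 4., 2.]), np.array([0., 0., 1.])]
print(len(greedy_basis(vecs)))   # 3 (the third vector is the sum of the first two)
```

In exact arithmetic the rank test is precisely the "is $\vec{v}$ a linear combination of the current set" question from the proof; with floating-point vectors a tolerance is needed.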