Lecture 14: Examples of Change of Basis and Matrix Transformations


LECTURE 14: EXAMPLES OF CHANGE OF BASIS AND MATRIX TRANSFORMATIONS. QUADRATIC FORMS.
Prof. N. Harnew, University of Oxford, MT 2012

Outline:
14.1 Examples of change of basis
  14.1.1 Representation of a 2D vector in a rotated coordinate frame
  14.1.2 Rotation of a coordinate system in 2D
14.2 Rotation of a vector in a fixed 3D coordinate system
  14.2.1 Example 1
  14.2.2 Example 2
14.3 Matrices and quadratic forms
  14.3.1 Example 1: a 2 × 2 quadratic form
  14.3.2 Example 2: another 2 × 2 quadratic form
  14.3.3 Example 3: a 3 × 3 quadratic form

14.1 Examples of change of basis

14.1.1 Representation of a 2D vector in a rotated coordinate frame

- Transform the vector r from the Cartesian axes (x, y) into the frame (x', y'), rotated by an angle θ.
- In the rotated frame, x' = r cos α and y' = r sin α; in the original frame, x = r cos(θ + α) and y = r sin(θ + α).
- Expanding the compound angles and substituting x' = r cos α, y' = r sin α:
  x = r cos θ cos α − r sin θ sin α = x' cos θ − y' sin θ
  y = r sin θ cos α + r cos θ sin α = x' sin θ + y' cos θ
- Coordinate transformation:
  \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x' \\ y' \end{pmatrix}   (1)
- Taking the inverse:
  \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}   (2)
  These equations relate the coordinates of r measured in the (x, y) frame to those measured in the rotated (x', y') frame.

14.1.2 Rotation of a coordinate system in 2D

- Start from the familiar orthonormal basis
  |e_1⟩ = \begin{pmatrix} 1 \\ 0 \end{pmatrix} (≡ x̂),  |e_2⟩ = \begin{pmatrix} 0 \\ 1 \end{pmatrix} (≡ ŷ)   (3)
- Transform the basis via a rotation through an angle θ. New basis:
  |e'_1⟩ = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} (≡ x̂'),  |e'_2⟩ = \begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix} (≡ ŷ')   (4)
- The transformation matrix S is determined from |e'_i⟩ = S |e_i⟩:
  \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} \;\Rightarrow\; S_{11} = \cos\theta,\; S_{21} = \sin\theta   (5)
  \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix} \;\Rightarrow\; S_{12} = -\sin\theta,\; S_{22} = \cos\theta   (6)
  Hence
  S(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}   (7)
  As expected, the basis transformation |e_i⟩ → |e'_i⟩ is the inverse of the transformation (x, y) → (x', y') of the components derived in the previous subsection.
- The inverse transformation matrix rotates backwards:
  S^{-1}(\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \equiv S(-\theta)   (8)
- It is easy to show by substitution that two successive rotations satisfy S(θ) S(α) = S(α) S(θ) = S(θ + α).

14.2 Rotation of a vector in a fixed 3D coordinate system

- In 3D we can rotate a vector r about any one of the three axes: r' = R(θ) r. A rotation about the z axis is given by
  R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}   (9)
  Note that rotations of a vector in a fixed coordinate system transform in the same way as rotations of the basis vectors (see the previous section).
- For rotations about the x and y axes,
  R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}, \qquad R_y(\gamma) = \begin{pmatrix} \cos\gamma & 0 & \sin\gamma \\ 0 & 1 & 0 \\ -\sin\gamma & 0 & \cos\gamma \end{pmatrix}   (10)
- But note that successive rotations do not in general commute: R_z(θ) R_x(α) ≠ R_x(α) R_z(θ).

14.2.1 Example 1

- Rotate the unit vector (1, 0, 0) by 90° about the z-axis:
  R_z(90°) \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
  which is a unit vector along the y-axis, as expected.
- Now make a second rotation of 90°, about the x-axis:
  R_x(90°) R_z(90°) \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
- So by now the procedure of matrix multiplication should be clear: the exact form of the row/column multiplication is what is needed to make a linear transformation between two bases.
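As a small numerical sketch (not part of the original notes, and assuming NumPy is available), the two successive 90° rotations above can be checked directly; the helper names Rz and Rx are illustrative only:

```python
import numpy as np

def Rz(theta):
    """Rotation by angle theta (radians) about the z axis, as in Eq. (9)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def Rx(alpha):
    """Rotation by angle alpha (radians) about the x axis, as in Eq. (10)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

v = np.array([1.0, 0.0, 0.0])
quarter = np.pi / 2

# 90 degrees about z takes (1, 0, 0) to (0, 1, 0)
print(np.round(Rz(quarter) @ v, 10))

# a further 90 degrees about x takes it to (0, 0, 1)
print(np.round(Rx(quarter) @ Rz(quarter) @ v, 10))

# successive 3D rotations do not commute in general
print(np.allclose(Rz(quarter) @ Rx(quarter), Rx(quarter) @ Rz(quarter)))  # False
```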
The same row/column form is also what is required for rotations of vectors in their associated vector space(s).

14.2.2 Example 2

- Rotate the vector r = (1, 2, 3) by 30° about the y axis, using sin 30° = 1/2 and cos 30° = √3/2.
- The rotation matrix is
  R_y(\gamma) = \begin{pmatrix} \cos\gamma & 0 & \sin\gamma \\ 0 & 1 & 0 \\ -\sin\gamma & 0 & \cos\gamma \end{pmatrix} = \begin{pmatrix} \sqrt{3}/2 & 0 & 1/2 \\ 0 & 1 & 0 \\ -1/2 & 0 & \sqrt{3}/2 \end{pmatrix}   (11)
  so that
  \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \sqrt{3}/2 & 0 & 1/2 \\ 0 & 1 & 0 \\ -1/2 & 0 & \sqrt{3}/2 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} \sqrt{3}/2 + 3/2 \\ 2 \\ -1/2 + 3\sqrt{3}/2 \end{pmatrix}   (12)
- As a check, rotate back using the inverse matrix:
  \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \sqrt{3}/2 & 0 & -1/2 \\ 0 & 1 & 0 \\ 1/2 & 0 & \sqrt{3}/2 \end{pmatrix} \begin{pmatrix} \sqrt{3}/2 + 3/2 \\ 2 \\ -1/2 + 3\sqrt{3}/2 \end{pmatrix}   (13)
  = \begin{pmatrix} 3/4 + 3\sqrt{3}/4 + 1/4 - 3\sqrt{3}/4 \\ 2 \\ \sqrt{3}/4 + 3/4 - \sqrt{3}/4 + 9/4 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \quad \text{as required.}   (14)

14.3 MATRICES AND QUADRATIC FORMS

Best illustrated by a few examples.

14.3.1 Example 1: a 2 × 2 quadratic form

- Represent the equation x² + y² = 1 in the matrix form X^T A X = 1.
- The matrix A is a transformation matrix which represents the conic form of the equation:
  x^2 + y^2 = \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 1   (15)

14.3.2 Example 2: another 2 × 2 quadratic form

- Matrix representation:
  \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} 5 & 2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 5   (16)
- Written out in equation form:
  \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} 5x + 2y \\ 2x + 3y \end{pmatrix} = 5x^2 + (2xy + 2xy) + 3y^2 = 5x^2 + 4xy + 3y^2   (17)
  so the quadratic form is 5x² + 4xy + 3y² = 5.

14.3.3 Example 3: a 3 × 3 quadratic form

- Represent x² + y² − 3z² + 2xy + 6xz − 6yz = 4 in the matrix form X^T A X = 4.
- Write
  \begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} ax + by + cz \\ dx + ey + fz \\ gx + hy + iz \end{pmatrix}   (18)
  = ax^2 + bxy + cxz + dxy + ey^2 + fyz + gxz + hyz + iz^2
- Comparing terms: [x²] → a = 1, [y²] → e = 1, [z²] → i = −3, [xy] → b + d = 2, [xz] → c + g = 6, [yz] → f + h = −6 (underconstrained).
- In echelon form, set b = c = f = 0:
  X^T \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 6 & -6 & -3 \end{pmatrix} X = 4   (19)
- In symmetric form, set b = d, c = g, f = h:
  X^T \begin{pmatrix} 1 & 1 & 3 \\ 1 & 1 & -3 \\ 3 & -3 & -3 \end{pmatrix} X = 4   (20)
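As a closing sketch (again not part of the original notes, and assuming NumPy), the symmetric matrix of Eq. (20) can be checked by comparing X^T A X against the written-out polynomial at a few random points; the helper names are illustrative only:

```python
import numpy as np

# Symmetric matrix for x^2 + y^2 - 3z^2 + 2xy + 6xz - 6yz, as in Eq. (20):
# each off-diagonal entry is half the corresponding cross-term coefficient.
A = np.array([[1.0,  1.0,  3.0],
              [1.0,  1.0, -3.0],
              [3.0, -3.0, -3.0]])

def quadratic_form(X):
    """Evaluate X^T A X for a 3-vector X."""
    return X @ A @ X

def polynomial(x, y, z):
    """The same quadratic form written out as a polynomial."""
    return x**2 + y**2 - 3*z**2 + 2*x*y + 6*x*z - 6*y*z

rng = np.random.default_rng(0)
for _ in range(5):
    X = rng.normal(size=3)
    assert np.isclose(quadratic_form(X), polynomial(*X))
print("X^T A X matches the polynomial at all test points")
```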