Unitary Matrices

Chapter 4  Unitary Matrices

4.1 Basics

This chapter considers a very important class of matrices that are quite useful in proving a number of structure theorems about all matrices. Called unitary matrices, they comprise a class of matrices that have the remarkable property that, as transformations, they preserve length and preserve the angle between vectors. This is of course true for the identity transformation. Therefore it is helpful to regard unitary matrices as "generalized identities," though we will see that they form quite a large class. An important example of these matrices, the rotations, has already been considered. In this chapter the underlying field is usually $\mathbb{C}$, the underlying vector space is $\mathbb{C}^n$, and almost without exception the underlying norm is $\|\cdot\|_2$. We begin by recalling a few important facts.

Recall that a set of vectors $x_1, \dots, x_k \in \mathbb{C}^n$ is called orthogonal if $x_j^* x_m = \langle x_m, x_j\rangle = 0$ for $1 \le j \ne m \le k$. The set is orthonormal if
$$
x_j^* x_m = \delta_{jm} = \begin{cases} 1 & j = m \\ 0 & j \ne m. \end{cases}
$$
An orthogonal set of nonzero vectors can be made orthonormal by scaling:
$$
x_j \longrightarrow \frac{1}{(x_j^* x_j)^{1/2}}\, x_j .
$$

Theorem 4.1.1. Every set of orthonormal vectors is linearly independent.

Proof. The proof is routine, using a common technique. Suppose $S = \{u_j\}_{j=1}^k$ is orthonormal and linearly dependent. Then, without loss of generality (by relabeling if needed), we can assume
$$
u_k = \sum_{j=1}^{k-1} c_j u_j .
$$
Compute $u_k^* u_k$ as
$$
1 = u_k^* u_k = u_k^* \sum_{j=1}^{k-1} c_j u_j = \sum_{j=1}^{k-1} c_j\, u_k^* u_j = 0 .
$$
This contradiction proves the result.

Corollary 4.1.1. If $S = \{u_1, \dots, u_k\} \subset \mathbb{C}^n$ is orthonormal, then $k \le n$.

Corollary 4.1.2. Every $k$-dimensional subspace of $\mathbb{C}^n$ has an orthonormal basis.

Proof. Apply the Gram–Schmidt process to any basis to orthonormalize it.

Definition 4.1.1. A matrix $U \in M_n$ is said to be unitary if $U^*U = I$. [If $U \in M_n(\mathbb{R})$ and $U^T U = I$, then $U$ is called real orthogonal.]

Note: A linear transformation $T : \mathbb{C}^n \to \mathbb{C}^n$ is called an isometry if $\|Tx\| = \|x\|$ for all $x \in \mathbb{C}^n$.

Proposition 4.1.1. Suppose that $U \in M_n$ is unitary. (i) The columns of $U$ form an orthonormal basis of $\mathbb{C}^n$ (or of $\mathbb{R}^n$, if $U$ is real). (ii) The spectrum $\sigma(U) \subset \{z : |z| = 1\}$. (iii) $|\det U| = 1$.

Proof. Part (i) is a consequence of the definition. To prove (ii), denote the columns of $U$ by $u_i$, $i = 1, \dots, n$. If $\lambda$ is an eigenvalue of $U$ with pertaining eigenvector $x$, then
$$
|\lambda|\,\|x\| = \|\lambda x\| = \|Ux\| = \Bigl\|\sum_i x_i u_i\Bigr\| = \Bigl(\sum_i |x_i|^2\Bigr)^{1/2} = \|x\| .
$$
Hence $|\lambda| = 1$. Finally, (iii) follows directly because $\det U = \prod_i \lambda_i$, so $|\det U| = \prod_i |\lambda_i| = 1$.
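Each of these three properties is easy to verify numerically. The following minimal sketch (assuming NumPy, and taking the Q factor of the QR factorization of a random complex matrix as a generic unitary matrix) checks them directly:

```python
import numpy as np

# A generic unitary matrix: the Q factor of the QR factorization of a
# random complex matrix is unitary.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(A)

# (i) The columns of U are orthonormal, i.e. U* U = I.
print(np.allclose(U.conj().T @ U, np.eye(n)))       # True

# (ii) Every eigenvalue of U lies on the unit circle.
eigenvalues = np.linalg.eigvals(U)
print(np.allclose(np.abs(eigenvalues), 1.0))        # True

# (iii) |det U| = 1.
print(np.isclose(abs(np.linalg.det(U)), 1.0))       # True
```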
This important result is just one of many equivalent characterizations of unitary matrices. In the result below, a number of equivalences are established.

Theorem 4.1.2. Let $U \in M_n$. The following are equivalent.

(a) $U$ is unitary.
(b) $U$ is nonsingular and $U^* = U^{-1}$.
(c) $UU^* = I$.
(d) $U^*$ is unitary.
(e) The columns of $U$ form an orthonormal set.
(f) The rows of $U$ form an orthonormal set.
(g) $U$ is an isometry.
(h) $U$ carries every set of orthonormal vectors to a set of orthonormal vectors.

Proof. (a) $\Rightarrow$ (b) follows from the definition of unitary and the fact that the inverse is unique.

(b) $\Rightarrow$ (c) follows from the fact that a left inverse is also a right inverse.

(a) $\Rightarrow$ (d): by (c), $(U^*)^* U^* = UU^* = I$, so $U^*$ is unitary.

(b) $\equiv$ (e): the identity $U^*U = I$, read entrywise, says exactly that $u_j^* u_k = \delta_{jk}$, where $u_1, \dots, u_n$ are the columns of $U$.

(d) $\equiv$ (f): the same reasoning, applied to $U^*$.

(e) $\Rightarrow$ (g): We know the columns of $U$ are orthonormal. Denoting the columns by $u_1, \dots, u_n$, we have
$$
Ux = \sum_{i=1}^n x_i u_i ,
$$
where $x = (x_1, \dots, x_n)^T$. It is an easy matter to see that
$$
\|Ux\|^2 = \sum_{i=1}^n |x_i|^2 = \|x\|^2 .
$$

(g) $\Rightarrow$ (e): Consider $x = e_j$. Then $Ux = u_j$, and hence $1 = \|e_j\| = \|Ue_j\| = \|u_j\|$, so the columns of $U$ have norm one. Now let $x = \alpha e_i + \beta e_j$ (with $i \ne j$) be chosen such that
$$
\|x\| = \|\alpha e_i + \beta e_j\| = \sqrt{|\alpha|^2 + |\beta|^2} = 1 .
$$
Then
$$
\begin{aligned}
1 = \|Ux\|^2 &= \|U(\alpha e_i + \beta e_j)\|^2 \\
&= \langle U(\alpha e_i + \beta e_j),\, U(\alpha e_i + \beta e_j)\rangle \\
&= |\alpha|^2 \langle Ue_i, Ue_i\rangle + |\beta|^2 \langle Ue_j, Ue_j\rangle + \alpha\bar\beta\, \langle Ue_i, Ue_j\rangle + \bar\alpha\beta\, \langle Ue_j, Ue_i\rangle \\
&= |\alpha|^2 \langle u_i, u_i\rangle + |\beta|^2 \langle u_j, u_j\rangle + \alpha\bar\beta\, \langle u_i, u_j\rangle + \bar\alpha\beta\, \langle u_j, u_i\rangle \\
&= |\alpha|^2 + |\beta|^2 + 2\,\mathrm{Re}\bigl(\alpha\bar\beta\, \langle u_i, u_j\rangle\bigr) \\
&= 1 + 2\,\mathrm{Re}\bigl(\alpha\bar\beta\, \langle u_i, u_j\rangle\bigr).
\end{aligned}
$$
Thus $\mathrm{Re}\bigl(\alpha\bar\beta\, \langle u_i, u_j\rangle\bigr) = 0$. Now suppose $\langle u_i, u_j\rangle = s + it$. Selecting $\alpha = \beta = \tfrac{1}{\sqrt{2}}$ we obtain $\mathrm{Re}\,\langle u_i, u_j\rangle = 0$, and selecting $\alpha = i\beta = \tfrac{i}{\sqrt{2}}$ we obtain $\mathrm{Im}\,\langle u_i, u_j\rangle = 0$. Thus $\langle u_i, u_j\rangle = 0$. Since the coordinates $i$ and $j$ are arbitrary, it follows that the columns of $U$ are orthogonal.

(g) $\Rightarrow$ (h): Suppose $\{v_1, \dots, v_n\}$ is orthonormal. For any two of them, $\|U(v_j + v_k)\|^2 = \|v_j + v_k\|^2$; expanding both sides and using $\|Uv_j\| = \|v_j\|$ and $\|Uv_k\| = \|v_k\|$ gives $\mathrm{Re}\,\langle Uv_j, Uv_k\rangle = 0$, and the same computation with $v_j + i v_k$ gives $\mathrm{Im}\,\langle Uv_j, Uv_k\rangle = 0$. Hence $\langle Uv_j, Uv_k\rangle = 0$.

(h) $\Rightarrow$ (e): The orthonormal set of standard unit vectors $e_j$, $j = 1, \dots, n$, is carried to the columns of $U$; that is, $Ue_j = u_j$, the $j$th column of $U$. Therefore the columns of $U$ are orthonormal.

Corollary 4.1.3. If $U \in M_n(\mathbb{C})$ is unitary, then the transformation defined by $U$ preserves angles.

Proof. For any vectors $x, y \in \mathbb{C}^n$, the angle $\theta$ between them is completely determined from the inner product via
$$
\cos\theta = \frac{\langle x, y\rangle}{\|x\|\,\|y\|} .
$$
Since $U$ is unitary (and thus an isometry), it follows that
$$
\langle Ux, Uy\rangle = \langle U^*Ux, y\rangle = \langle x, y\rangle .
$$
This proves the result.

Example 4.1.1. Let
$$
T(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},
$$
where $\theta$ is any real number. Then $T(\theta)$ is real orthogonal.

Proposition 4.1.2. If $U \in M_2(\mathbb{R})$ is real orthogonal, then $U$ has the form $T(\theta)$ for some $\theta$, or the form
$$
U = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} T(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ -\sin\theta & -\cos\theta \end{bmatrix}.
$$

Finally, we can easily establish the diagonalizability of unitary matrices.

Theorem 4.1.3. If $U \in M_n$ is unitary, then it is diagonalizable.

Proof. To prove this we need to revisit the proof of Theorem 3.5.2. As before, select the first vector to be a normalized eigenvector $u_1$ pertaining to $\lambda_1$. Now choose the remaining vectors to be orthonormal to $u_1$. This makes the matrix $P_1$, with all these vectors as columns, a unitary matrix. Therefore $B_1 = P_1^{-1} U P_1$ is also unitary. However, it has the form
$$
B_1 = \begin{bmatrix} \lambda_1 & \alpha_1 & \cdots & \alpha_{n-1} \\ 0 & & & \\ \vdots & & A_2 & \\ 0 & & & \end{bmatrix},
$$
where $A_2$ is $(n-1) \times (n-1)$. Since $B_1$ is unitary, it must have orthogonal columns by Theorem 4.1.2. It follows then that $\alpha_1 = \alpha_2 = \cdots = \alpha_{n-1} = 0$ and
$$
B_1 = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A_2 & \\ 0 & & & \end{bmatrix}.
$$
At this point one may apply an inductive hypothesis to conclude that $A_2$ is similar to a diagonal matrix. Thus, by the manner in which the full similarity was constructed, we see that $U$ must also be similar to a diagonal matrix.

Corollary 4.1.4. Let $U \in M_n$ be unitary. Then

(i) $U$ has a set of $n$ orthogonal eigenvectors.

(ii) Let $\{\lambda_1, \dots, \lambda_n\}$ and $\{v_1, \dots, v_n\}$ denote respectively the eigenvalues and their pertaining orthonormal eigenvectors of $U$. Then $U$ has the representation as a sum of rank-one matrices given by
$$
U = \sum_{j=1}^n \lambda_j v_j v_j^* .
$$
This representation is often called the spectral representation or spectral decomposition of $U$.
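The spectral representation can be illustrated numerically. The minimal sketch below (assuming NumPy, and assuming the generic case of distinct eigenvalues so that the computed eigenvectors come out orthonormal) reassembles a random unitary matrix from its rank-one terms $\lambda_j v_j v_j^*$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(A)            # a generic unitary matrix

# Eigen-decomposition.  For a unitary (hence normal) matrix whose
# eigenvalues are distinct -- the generic case for a random U -- the
# computed unit eigenvectors are automatically orthonormal.
lam, V = np.linalg.eig(U)
print(np.allclose(np.abs(lam), 1.0))             # eigenvalues on the unit circle
print(np.allclose(V.conj().T @ V, np.eye(n)))    # eigenvectors orthonormal

# Spectral representation: U = sum_j lambda_j v_j v_j^*
U_rebuilt = sum(lam[j] * np.outer(V[:, j], V[:, j].conj()) for j in range(n))
print(np.allclose(U_rebuilt, U))                 # True
```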
4.1.1 Groups of matrices

Invertible and unitary matrices have a fundamental structure that makes possible a great many general statements about their nature and the way they act upon other vectors and matrices. A group is a set with a mathematical operation, a product, that obeys a minimal set of properties so as to resemble the nonzero numbers under multiplication.

Definition 4.1.2. A group $G$ is a set with a binary operation $G \times G \to G$ which assigns to every pair $a, b$ of elements of $G$ a unique element $ab$ in $G$. The operation, called the product, satisfies four properties:

1. Closure. If $a, b \in G$, then $ab \in G$.

2. Associativity. If $a, b, c \in G$, then $a(bc) = (ab)c$.

3. Identity. There exists an element $e \in G$ such that $ae = ea = a$ for every $a \in G$; $e$ is called the identity of $G$.

4. Inverse. For each $a \in G$, there exists an element $\hat{a} \in G$ such that $a\hat{a} = \hat{a}a = e$; $\hat{a}$ is called the inverse of $a$ and is often denoted by $a^{-1}$.

A subset of $G$ that is itself a group under the same product is called a subgroup of $G$.

It may be interesting to note that removal of any of the properties 2–4 leads to other categories of sets that have interest, and in fact applications, in their own right. Moreover, many groups have additional properties such as commutativity, i.e. $ab = ba$ for all $a, b \in G$. Below are a few examples of matrix groups; note that matrix addition is not involved in these definitions.

Example 4.1.2. As usual, $M_n$ is the vector space of $n \times n$ matrices.
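One matrix group that fits this definition is the set of $n \times n$ unitary matrices under matrix multiplication. The minimal sketch below (assuming NumPy; `random_unitary` is a hypothetical helper, not from the text) checks the closure, identity, and inverse properties numerically; associativity holds automatically because matrix multiplication is associative.

```python
import numpy as np

def random_unitary(n, seed):
    """Return a generic n x n unitary matrix: the Q factor of a random complex matrix."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(A)
    return Q

def is_unitary(M):
    """Check M* M = I up to floating-point tolerance."""
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

n = 3
U = random_unitary(n, 0)
V = random_unitary(n, 1)

print(is_unitary(U @ V))          # closure: a product of unitary matrices is unitary
print(is_unitary(np.eye(n)))      # identity: I is unitary and IU = UI = U
print(is_unitary(U.conj().T))     # inverse: U^{-1} = U*, which is again unitary
print(np.allclose(U @ U.conj().T, np.eye(n)))   # U U* = I
```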