Chapter 4 Systems of Linear Differential Equations


Introduction to Systems

Up to this point the entries in a vector or matrix have been real numbers. In this section, and in the following sections, we will be dealing with vectors and matrices whose entries are functions. A vector whose components are functions is called a vector-valued function or vector function. Similarly, a matrix whose entries are functions is called a matrix function.

The operations of vector and matrix addition, multiplication by a number, and matrix multiplication for vector and matrix functions are exactly as defined in Chapter 5, so there is nothing new in terms of arithmetic. However, there are operations on functions other than arithmetic operations, e.g., limits, differentiation, and integration, that we have to define for vector and matrix functions. These operations from calculus are defined in a natural way.

Let $v(t) = (f_1(t), f_2(t), \dots, f_n(t))$ be a vector function whose components are defined on an interval $I$.

Limit: Let $c \in I$. If $\lim_{t \to c} f_i(t) = \alpha_i$ exists for $i = 1, 2, \dots, n$, then
$$\lim_{t \to c} v(t) = \left( \lim_{t \to c} f_1(t),\ \lim_{t \to c} f_2(t),\ \dots,\ \lim_{t \to c} f_n(t) \right) = (\alpha_1, \alpha_2, \dots, \alpha_n).$$
Limits of vector functions are calculated "component-wise."

Derivative: If $f_1, f_2, \dots, f_n$ are differentiable on $I$, then $v$ is differentiable on $I$, and
$$v'(t) = (f_1'(t), f_2'(t), \dots, f_n'(t)).$$
That is, $v'$ is the vector function whose components are the derivatives of the components of $v$.

Integration: Since differentiation of vector functions is done component-wise, integration must also be component-wise. That is,
$$\int v(t)\,dt = \left( \int f_1(t)\,dt,\ \int f_2(t)\,dt,\ \dots,\ \int f_n(t)\,dt \right).$$

Limits, differentiation, and integration of matrix functions are done in exactly the same way, component-wise.

4.1. Systems of Linear Differential Equations

Consider the third-order linear differential equation
$$y''' + p(t)y'' + q(t)y' + r(t)y = f(t)$$
where $p, q, r, f$ are continuous functions on some interval $I$. Solving the equation for $y'''$, we get
$$y''' = -r(t)y - q(t)y' - p(t)y'' + f(t).$$
Introduce new dependent variables $x_1, x_2, x_3$, as follows:
$$x_1 = y, \qquad x_2 = x_1'\ (= y'), \qquad x_3 = x_2'\ (= y'').$$
Then
$$y''' = x_3' = -r(t)x_1 - q(t)x_2 - p(t)x_3 + f(t)$$
and the third-order equation can be written equivalently as a system of three first-order equations:
$$\begin{aligned}
x_1' &= x_2 \\
x_2' &= x_3 \\
x_3' &= -r(t)x_1 - q(t)x_2 - p(t)x_3 + f(t).
\end{aligned}$$

Example 1. (a) Consider the third-order nonhomogeneous equation
$$y''' - y'' - 8y' + 12y = 2e^t.$$
Solving the equation for $y'''$, we have
$$y''' = -12y + 8y' + y'' + 2e^t.$$
Let $x_1 = y$, $x_2 = x_1'\ (= y')$, $x_3 = x_2'\ (= y'')$. Then
$$y''' = x_3' = -12x_1 + 8x_2 + x_3 + 2e^t$$
and the equation converts to the equivalent system:
$$\begin{aligned}
x_1' &= x_2 \\
x_2' &= x_3 \\
x_3' &= -12x_1 + 8x_2 + x_3 + 2e^t.
\end{aligned}$$

Note: This system is just a very special case of the "general" system of three first-order differential equations:
$$\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + a_{13}(t)x_3 + b_1(t) \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + a_{23}(t)x_3 + b_2(t) \\
x_3' &= a_{31}(t)x_1 + a_{32}(t)x_2 + a_{33}(t)x_3 + b_3(t).
\end{aligned}$$

(b) Consider the second-order homogeneous equation
$$t^2 y'' - t y' - 3y = 0.$$
Solving this equation for $y''$, we get
$$y'' = \frac{3}{t^2}\,y + \frac{1}{t}\,y'.$$
To convert this equation to an equivalent system, we let $x_1 = y$, $x_2 = x_1'\ (= y')$. Then we have
$$\begin{aligned}
x_1' &= x_2 \\
x_2' &= \frac{3}{t^2}\,x_1 + \frac{1}{t}\,x_2,
\end{aligned}$$
which is just a special case of the general system of two first-order differential equations:
$$\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + b_1(t) \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + b_2(t).
\end{aligned}$$
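The chapter carries out all of these computations by hand, but the first-order form produced above is exactly what standard numerical solvers expect. As an illustrative sketch (not part of the original text, and assuming NumPy and SciPy are available), the following integrates the system from Example 1(a) and compares the first component with $y(t) = e^{2t} + \tfrac12 e^t$, the solution verified in Example 3 below.

```python
# Illustrative sketch: integrate the first-order system equivalent to
# y''' - y'' - 8y' + 12y = 2e^t  (Example 1(a)) with SciPy's solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x = (x1, x2, x3) = (y, y', y'')
    x1, x2, x3 = x
    return [x2, x3, -12*x1 + 8*x2 + x3 + 2*np.exp(t)]

# Initial conditions chosen to match y(t) = e^{2t} + (1/2)e^t:
# y(0) = 3/2, y'(0) = 5/2, y''(0) = 9/2.
x0 = [1.5, 2.5, 4.5]
t_eval = np.linspace(0.0, 1.0, 11)
sol = solve_ivp(rhs, (0.0, 1.0), x0, t_eval=t_eval, rtol=1e-9, atol=1e-12)

exact = np.exp(2*t_eval) + 0.5*np.exp(t_eval)
print(np.max(np.abs(sol.y[0] - exact)))  # should be close to zero
```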
General Theory

Let $a_{11}(t), a_{12}(t), \dots, a_{1n}(t), a_{21}(t), \dots, a_{nn}(t), b_1(t), b_2(t), \dots, b_n(t)$ be continuous functions on some interval $I$. The system of $n$ first-order differential equations
$$\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + b_1(t) \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + b_2(t) \\
&\;\;\vdots \\
x_n' &= a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + b_n(t)
\end{aligned} \qquad (\text{S})$$
is called a first-order linear differential system.

The system (S) is homogeneous if $b_1(t) \equiv b_2(t) \equiv \cdots \equiv b_n(t) \equiv 0$ on $I$. (S) is nonhomogeneous if the functions $b_i(t)$ are not all identically zero on $I$; that is, if there is at least one point $a \in I$ and at least one function $b_i(t)$ such that $b_i(a) \neq 0$.

Let $A(t)$ be the $n \times n$ matrix
$$A(t) = \begin{pmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\
\vdots & \vdots & & \vdots \\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{pmatrix}$$
and let $x$ and $b$ be the vectors
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad
b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.$$
Then (S) can be written in the vector-matrix form
$$x' = A(t)\,x + b. \qquad (\text{S})$$
The matrix $A(t)$ is called the matrix of coefficients or the coefficient matrix.

Example 2. The vector-matrix form of the system in Example 1(a) is:
$$x' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} x + \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix},$$
a nonhomogeneous system. The vector-matrix form of the system in Example 1(b) is:
$$x' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} x + \begin{pmatrix} 0 \\ 0 \end{pmatrix}
    = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} x,
    \quad \text{where } x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$
a homogeneous system.

A solution of the linear differential system (S) is a differentiable vector function
$$x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}$$
that satisfies (S) on the interval $I$.

Example 3. Verify that
$$x(t) = \begin{pmatrix} e^{2t} + \tfrac12 e^t \\ 2e^{2t} + \tfrac12 e^t \\ 4e^{2t} + \tfrac12 e^t \end{pmatrix}$$
is a solution of the nonhomogeneous system
$$x' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} x + \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix}$$
of Example 2.

SOLUTION Differentiating component-wise,
$$x'(t) = \begin{pmatrix} 2e^{2t} + \tfrac12 e^t \\ 4e^{2t} + \tfrac12 e^t \\ 8e^{2t} + \tfrac12 e^t \end{pmatrix}.$$
On the other hand,
$$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix}
\begin{pmatrix} e^{2t} + \tfrac12 e^t \\ 2e^{2t} + \tfrac12 e^t \\ 4e^{2t} + \tfrac12 e^t \end{pmatrix}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix}
= \begin{pmatrix} 2e^{2t} + \tfrac12 e^t \\ 4e^{2t} + \tfrac12 e^t \\ 8e^{2t} - \tfrac32 e^t \end{pmatrix}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix}
= \begin{pmatrix} 2e^{2t} + \tfrac12 e^t \\ 4e^{2t} + \tfrac12 e^t \\ 8e^{2t} + \tfrac12 e^t \end{pmatrix}
= x'(t).$$
Thus $x$ is a solution.

THEOREM 1. (Existence and Uniqueness Theorem) Let $a$ be any point on the interval $I$, and let $\alpha_1, \alpha_2, \dots, \alpha_n$ be any $n$ real numbers. Then the initial-value problem
$$x' = A(t)\,x + b(t), \qquad x(a) = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}$$
has a unique solution.

Exercises 4.1

Convert the differential equation into a system of first-order equations.

1. $y'' - ty' + 3y = \sin 2t$.

2. $y'' + y = 2e^{-2t}$.

3. $y''' - y'' + y = e^t$.

4. $my'' + cy' + ky = \cos \lambda t$; $m, c, k, \lambda$ are constants.

Write the system in vector-matrix form.

5. $x_1' = -2x_1 + x_2 + \sin t$
   $x_2' = x_1 - 3x_2 - 2\cos t$

6. $x_1' = e^t x_1 - e^{2t} x_2$
   $x_2' = e^{-t} x_1 - 3e^t x_2$

7. $x_1' = 2x_1 + x_2 + 3x_3 + 3e^{2t}$
   $x_2' = x_1 - 3x_2 - 2\cos t$
   $x_3' = 2x_1 - x_2 + 4x_3 + t$

8. $x_1' = t^2 x_1 + x_2 - tx_3 + 3$
   $x_2' = -3e^t x_2 + 2x_3 - 2e^{-2t}$
   $x_3' = 2x_1 + t^2 x_2 + 4x_3$

9. Verify that $u(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}$ is a solution of the system in Example 1(b).

10. Verify that $u(t) = \begin{pmatrix} e^{-3t} + \tfrac12 e^t \\ -3e^{-3t} + \tfrac12 e^t \\ 9e^{-3t} + \tfrac12 e^t \end{pmatrix}$ is a solution of the system in Example 1(a).

11. Verify that $w(t) = \begin{pmatrix} te^{2t} \\ e^{2t} + 2te^{2t} \\ 4e^{2t} + 4te^{2t} \end{pmatrix}$ is a solution of the homogeneous system associated with the system in Example 1(a).

12. Verify that $x(t) = \begin{pmatrix} -\sin t \\ -\cos t - 2\sin t \end{pmatrix}$ is a solution of the system
$$x' = \begin{pmatrix} -2 & 1 \\ -3 & 2 \end{pmatrix} x + \begin{pmatrix} 0 \\ 2\sin t \end{pmatrix}.$$

13. Verify that $x(t) = \begin{pmatrix} -2e^{-2t} \\ 0 \\ 3e^{-2t} \end{pmatrix}$ is a solution of the system
$$x' = \begin{pmatrix} 1 & -3 & 2 \\ 0 & -1 & 0 \\ 0 & -1 & -2 \end{pmatrix} x.$$
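Exercises 9–13 above call for verifications done by hand exactly as in Example 3. As a supplement (not part of the original text), here is a short SymPy sketch that checks Exercise 13 by confirming that $x'(t) - A\,x(t)$ is the zero vector:

```python
# Illustrative sketch: verify Exercise 13 symbolically with SymPy.
import sympy as sp

t = sp.symbols('t')
x = sp.Matrix([-2*sp.exp(-2*t), 0, 3*sp.exp(-2*t)])
A = sp.Matrix([[1, -3, 2],
               [0, -1, 0],
               [0, -1, -2]])

residual = x.diff(t) - A*x
print(sp.simplify(residual))  # Matrix([[0], [0], [0]]) confirms x is a solution
```

The same three-line check, with $A$, $b$, and $x$ swapped out, handles the other verification exercises as well.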
4.2. Homogeneous Systems

In this section we give the basic theory for linear homogeneous systems. This "theory" is simply a repetition of results given in Sections 3.2 and 3.6, phrased this time in terms of the system
$$\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n \\
&\;\;\vdots \\
x_n' &= a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n
\end{aligned} \qquad (\text{H})$$
or
$$x' = A(t)\,x. \qquad (\text{H})$$

Note first that the zero vector
$$z(t) \equiv 0 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
is a solution of (H). As before, this solution is called the trivial solution. Of course, we are interested in finding nontrivial solutions.

THEOREM 1. If $x_1, x_2, \dots, x_k$ are solutions of (H), and if $c_1, c_2, \dots, c_k$ are real numbers, then
$$c_1 x_1 + c_2 x_2 + \cdots + c_k x_k$$
is a solution of (H); that is, any linear combination of solutions of (H) is also a solution of (H).
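A proof is immediate from the linearity of differentiation and of matrix multiplication; a sketch (not in the original excerpt):
$$\bigl( c_1 x_1 + c_2 x_2 + \cdots + c_k x_k \bigr)'
= c_1 x_1' + c_2 x_2' + \cdots + c_k x_k'
= c_1 A(t)x_1 + c_2 A(t)x_2 + \cdots + c_k A(t)x_k
= A(t)\bigl( c_1 x_1 + c_2 x_2 + \cdots + c_k x_k \bigr),$$
so the linear combination again satisfies (H).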