Fundamental Matrix for the System of ODEs


Fundamental Matrices, Matrix Exponentials & Repeated Eigenvalues – Sections 7.7 & 7.8

Given fundamental solutions $\vec{x}_1, \dots, \vec{x}_n$ of the ODE $\frac{d\vec{x}}{dt} = A\vec{x}$, we put them in an $n \times n$ matrix
$$\Psi(t) = (\vec{x}_1, \dots, \vec{x}_n),$$
with each of the solution vectors being a column. We call $\Psi(t)$ a fundamental matrix for the system of ODEs.

Example.
$$\frac{d\vec{x}}{dt} = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}\vec{x}, \qquad \det\begin{pmatrix} 1-\lambda & 2 \\ 2 & 1-\lambda \end{pmatrix} = (1-\lambda)^2 - 4 = \lambda^2 - 2\lambda - 3 = (\lambda-3)(\lambda+1).$$
So the eigenvalues of the matrix $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ in our ODE are $\lambda = 3, -1$. The corresponding eigenvectors are found by solving $(A - \lambda I)\vec{v} = 0$ using Gaussian elimination. We find that the eigenvector for eigenvalue $3$ is $\vec{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and the eigenvector for eigenvalue $-1$ is $\vec{w} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

So the corresponding solution vectors for our ODE system are
$$\vec{u}_1 = e^{3t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \vec{u}_2 = e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
Our fundamental matrix is
$$\Psi(t) = (\vec{u}_1\ \vec{u}_2) = \begin{pmatrix} e^{3t} & e^{-t} \\ e^{3t} & -e^{-t} \end{pmatrix}.$$
The general solution is
$$\vec{y}(t) = c_1\vec{u}_1 + c_2\vec{u}_2 = \Psi(t)\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
If you need to find $c_1, c_2$ satisfying some initial condition like $\vec{y}(t_0) = \vec{b}$, then you need to solve
$$\vec{y}(t_0) = \Psi(t_0)\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}, \quad \text{so} \quad \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \Psi(t_0)^{-1}\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}.$$
The inverse of the fundamental matrix exists because the Wronskian is not 0.

Matrix Exponential

Recall that the Taylor series for $\exp(x)$ is
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad n! = n(n-1)\cdots 2\cdot 1.$$
The series converges for all real numbers $x$. It also converges for all complex numbers $x$ and for all $n \times n$ matrices $A$:
$$\exp(A) = \sum_{n=0}^{\infty} \frac{A^n}{n!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots$$
So $\exp(0) = I$, the identity matrix. You can legally differentiate these Taylor series term by term. For a scalar $x$ and an $n \times n$ matrix $A$ this gives
$$\frac{d\exp(xA)}{dx} = \sum_{n=0}^{\infty} \frac{1}{n!}\,\frac{d(x^n A^n)}{dx} = \sum_{n=1}^{\infty} \frac{n\,x^{n-1}A^n}{n!} = A\sum_{n=1}^{\infty} \frac{(xA)^{n-1}}{(n-1)!} = A\exp(xA),$$
and again $\exp(0) = I$. So we have a solution to the system of ODEs:
$$\frac{d\vec{y}}{dx} = A\vec{y}, \quad \vec{y}(0) = \vec{b}; \qquad \vec{y} = \exp(xA)\,\vec{b}.$$

When a matrix $D$ is diagonal, it is easy to compute $\exp(D)$:
$$\exp\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} + \frac{1}{2!}\begin{pmatrix} a^2 & 0 \\ 0 & b^2 \end{pmatrix} + \frac{1}{3!}\begin{pmatrix} a^3 & 0 \\ 0 & b^3 \end{pmatrix} + \cdots = \begin{pmatrix} 1 + a + \frac{a^2}{2!} + \cdots & 0 \\ 0 & 1 + b + \frac{b^2}{2!} + \cdots \end{pmatrix} = \begin{pmatrix} e^a & 0 \\ 0 & e^b \end{pmatrix}.$$

When an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $\vec{v}_1, \dots, \vec{v}_n$ corresponding to the eigenvalues $\lambda_1, \dots, \lambda_n$, we can write
$$A = TDT^{-1}, \qquad T = (\vec{v}_1 \cdots \vec{v}_n), \quad D = \begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix}.$$
The reason is $A\vec{v}_j = \lambda_j\vec{v}_j$. One has $\exp(TDT^{-1}) = T\exp(D)\,T^{-1}$. From this it is easy to compute the solutions to the system of ODEs:
$$\frac{d\vec{y}}{dx} = A\vec{y}, \quad \vec{y}(0) = \vec{b}; \qquad \vec{y} = \exp(xA)\,\vec{b} = T\exp(xD)\,T^{-1}\vec{b}.$$
Setting $Q(x) = \exp(xD)$, the fundamental matrix is $\Psi(x) = TQ(x)$, where $D$ is the diagonal matrix of eigenvalues of $A$ and $T$ is the matrix coming from the corresponding eigenvectors in the same order. $\exp(xA)$ is itself a fundamental matrix for our ODE.

Repeated Eigenvalues

When an $n \times n$ matrix $A$ has repeated eigenvalues it may not have $n$ linearly independent eigenvectors. In that case it is not diagonalizable and it is said to be deficient (defective).

Example.
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad \det\begin{pmatrix} 1-\lambda & 1 \\ 0 & 1-\lambda \end{pmatrix} = (1-\lambda)^2 = 0.$$
The roots of this are both $1$. Gaussian elimination solves $(A - I)\vec{x} = 0$:
$$A - I = \begin{pmatrix} 1-1 & 1 \\ 0 & 1-1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Solutions have $x_2 = 0$ and $x_1$ arbitrary, so we have only one linearly independent eigenvector,
$$\vec{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
This gives us one solution to the ODE:
$$\frac{d\vec{y}}{dx} = A\vec{y}; \qquad \vec{y} = e^{x}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} e^{x} \\ 0 \end{pmatrix}.$$
How do we find a second linearly independent solution? Our first idea is just to multiply this solution by $x$, but $x e^{x}\vec{v}$ is not a solution: its derivative $e^{x}\vec{v} + x e^{x}\vec{v}$ picks up an extra $e^{x}\vec{v}$ term that $A(x e^{x}\vec{v}) = x e^{x}\vec{v}$ does not match. The matrix exponential solves the problem.
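The diagonalization recipe above is easy to check numerically. The following Python sketch is an illustration added by the editor, not part of the notes; it assumes NumPy and SciPy are available, and the names `fundamental_matrix`, `lam`, and `T` are ad hoc. It builds $\Psi(t) = T\exp(tD)$ for the example $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ and compares it with $\exp(tA)$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Eigen-decomposition: columns of T are eigenvectors, lam holds the eigenvalues
# 3 and -1 (order may vary).
lam, T = np.linalg.eig(A)

def fundamental_matrix(t):
    """Psi(t) = T exp(tD) with D = diag(lam)."""
    return T @ np.diag(np.exp(lam * t))

# exp(tA) = T exp(tD) T^{-1} is also a fundamental matrix; compare with scipy.
t = 0.7
print(np.allclose(expm(t * A), fundamental_matrix(t) @ np.linalg.inv(T)))  # True

# Solve the IVP y' = A y, y(0) = b either by y(t) = exp(tA) b
# or by choosing c with Psi(0) c = b and propagating with Psi(t).
b = np.array([2.0, 0.0])
c = np.linalg.solve(fundamental_matrix(0.0), b)
print(np.allclose(fundamental_matrix(t) @ c, expm(t * A) @ b))            # True
```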
Since $AB = BA$ implies $\exp(A+B) = \exp(A)\exp(B)$, our fundamental matrix is $\exp(xA)$. Write
$$A = I + N, \qquad N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Then $N^2 = 0$ implies
$$\exp(xN) = I + xN = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix},$$
and since $I$ and $N$ commute,
$$\Psi(x) = \exp(x(I+N)) = \exp(xI)\exp(xN) = \begin{pmatrix} e^{x} & x e^{x} \\ 0 & e^{x} \end{pmatrix}.$$

Another way: write $\vec{u}_2 = x e^{x}\vec{v} + e^{x}\vec{w}$ for some constant vector $\vec{w}$ to be determined. Then
$$\frac{d}{dx}\bigl(x e^{x}\vec{v} + e^{x}\vec{w}\bigr) = \frac{d(x e^{x})}{dx}\vec{v} + \frac{d e^{x}}{dx}\vec{w} = (e^{x} + x e^{x})\vec{v} + e^{x}\vec{w},$$
while
$$A\vec{u}_2 = A\bigl(x e^{x}\vec{v} + e^{x}\vec{w}\bigr) = x e^{x}A\vec{v} + e^{x}A\vec{w} = x e^{x}\vec{v} + e^{x}A\vec{w}.$$
Here we used the fact that $A\vec{v} = \vec{v}$. So now equate coefficients of $x e^{x}$ and $e^{x}$:
$$(e^{x} + x e^{x})\vec{v} + e^{x}\vec{w} = x e^{x}\vec{v} + e^{x}A\vec{w} \iff \begin{cases} \vec{v} + \vec{w} = A\vec{w} \\ x\vec{v} = x\vec{v} \end{cases}$$
We need to solve $\vec{v} = (A - I)\vec{w}$ for $\vec{w}$:
$$\mathrm{Gauss}(A - I \mid \vec{v}) = \left(\begin{array}{cc|c} 0 & 1 & 1 \\ 0 & 0 & 0 \end{array}\right).$$
The solution is $w_2 = 1$ with $w_1$ arbitrary; take $w_1 = 1$. Putting this all together:
$$\vec{u}_2 = x e^{x}\vec{v} + e^{x}\vec{w} = x e^{x}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^{x}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} (x+1)e^{x} \\ e^{x} \end{pmatrix}.$$
Check that this gives the same fundamental matrix as the exponential did: the second column of $(\vec{u}_1\ \vec{u}_2)$ is the second column of $\exp(xA)$ plus the first, so the two matrices have the same column span and both are fundamental matrices for the system.
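The same kind of numerical check works for the defective example. Again this is a hedged sketch by the editor (NumPy/SciPy assumed); `u2` is the generalized-eigenvector solution found above with $\vec{w} = (1, 1)^T$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - np.eye(2)                      # nilpotent part, N @ N == 0

x = 1.3
# exp(xA) = e^x (I + xN) because I and N commute and N^2 = 0.
closed_form = np.exp(x) * (np.eye(2) + x * N)
print(np.allclose(expm(x * A), closed_form))    # True

# Second solution u2(x) = x e^x v + e^x w with v = (1, 0), w = (1, 1).
def u2(x):
    return np.array([(x + 1.0) * np.exp(x), np.exp(x)])

# Check u2'(x) = A u2(x) with a central finite-difference derivative.
h = 1e-6
deriv = (u2(x + h) - u2(x - h)) / (2 * h)
print(np.allclose(deriv, A @ u2(x)))            # True
```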
Recommended publications
  • Parametrizations of K-Nonnegative Matrices
    Parametrizations of k-Nonnegative Matrices Anna Brosowsky, Neeraja Kulkarni, Alex Mason, Joe Suk, Ewin Tang∗ October 2, 2017 Abstract Totally nonnegative (positive) matrices are matrices whose minors are all nonnegative (positive). We generalize the notion of total nonnegativity, as follows. A k-nonnegative (resp. k-positive) matrix has all minors of size k or less nonnegative (resp. positive). We give a generating set for the semigroup of k-nonnegative matrices, as well as relations for certain special cases, i.e. the k = n − 1 and k = n − 2 unitriangular cases. In the above two cases, we find that the set of k-nonnegative matrices can be partitioned into cells, analogous to the Bruhat cells of totally nonnegative matrices, based on their factorizations into generators. We will show that these cells, like the Bruhat cells, are homeomorphic to open balls, and we prove some results about the topological structure of the closure of these cells, and in fact, in the latter case, the cells form a Bruhat-like CW complex. We also give a family of minimal k-positivity tests which form sub-cluster algebras of the total positivity test cluster algebra. We describe ways to jump between these tests, and give an alternate description of some tests as double wiring diagrams. 1 Introduction A totally nonnegative (respectively totally positive) matrix is a matrix whose minors are all nonnegative (respectively positive). Total positivity and nonnegativity are well-studied phenomena and arise in areas such as planar networks, combinatorics, dynamics, statistics and probability. The study of total positivity and total nonnegativity admit many varied applications, some of which are explored in “Totally Nonnegative Matrices” by Fallat and Johnson [5].
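As a concrete reading of the definition in this abstract, the brute-force sketch below is the editor's own illustration (the function name `is_k_nonnegative` is made up and the test matrix is just an example): it enumerates all minors of size at most k and checks that they are nonnegative.

```python
from itertools import combinations
import numpy as np

def is_k_nonnegative(A, k, tol=1e-12):
    """Check the definition from the abstract: every minor of size k or less
    is nonnegative. (Brute force, exponential in the matrix size.)"""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    for size in range(1, k + 1):
        for rows in combinations(range(n), size):
            for cols in combinations(range(m), size):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

# A totally nonnegative example (all minors >= 0), hence k-nonnegative for every k.
A = np.array([[1, 1, 0],
              [1, 2, 1],
              [0, 1, 2]])
print(is_k_nonnegative(A, 2), is_k_nonnegative(A, 3))   # True True
```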
  • Quantum Information
    Quantum Information J. A. Jones Michaelmas Term 2010 Contents 1 Dirac Notation 3 1.1 Hilbert Space . 3 1.2 Dirac notation . 4 1.3 Operators . 5 1.4 Vectors and matrices . 6 1.5 Eigenvalues and eigenvectors . 8 1.6 Hermitian operators . 9 1.7 Commutators . 10 1.8 Unitary operators . 11 1.9 Operator exponentials . 11 1.10 Physical systems . 12 1.11 Time-dependent Hamiltonians . 13 1.12 Global phases . 13 2 Quantum bits and quantum gates 15 2.1 The Bloch sphere . 16 2.2 Density matrices . 16 2.3 Propagators and Pauli matrices . 18 2.4 Quantum logic gates . 18 2.5 Gate notation . 21 2.6 Quantum networks . 21 2.7 Initialization and measurement . 23 2.8 Experimental methods . 24 3 An atom in a laser field 25 3.1 Time-dependent systems . 25 3.2 Sudden jumps . 26 3.3 Oscillating fields . 27 3.4 Time-dependent perturbation theory . 29 3.5 Rabi flopping and Fermi's Golden Rule . 30 3.6 Raman transitions . 32 3.7 Rabi flopping as a quantum gate . 32 3.8 Ramsey fringes . 33 3.9 Measurement and initialisation . 34 4 Spins in magnetic fields 35 4.1 The nuclear spin Hamiltonian . 35 4.2 The rotating frame . 36 4.3 On-resonance excitation . 38 4.4 Excitation phases . 38 4.5 Off-resonance excitation . 39 4.6 Practicalities . 40 4.7 The vector model . 40 4.8 Spin echoes . 41 4.9 Measurement and initialisation . 42 5 Photon techniques 43 5.1 Spatial encoding .
  • Diagonalizing a Matrix
    Diagonalizing a Matrix Definition 1. We say that two square matrices A and B are similar provided there exists an invertible matrix P so that B = P^{-1}AP. 2. We say a matrix A is diagonalizable if it is similar to a diagonal matrix. Example 1. The matrices and are similar matrices since . We conclude that is diagonalizable. 2. The matrices and are similar matrices since . After we have developed some additional theory, we will be able to conclude that the matrices and are not diagonalizable. Theorem Suppose A, B and C are square matrices. (1) A is similar to A. (2) If A is similar to B, then B is similar to A. (3) If A is similar to B and if B is similar to C, then A is similar to C. Proof of (3) Since A is similar to B, there exists an invertible matrix P so that B = P^{-1}AP. Also, since B is similar to C, there exists an invertible matrix R so that C = R^{-1}BR. Now, C = R^{-1}BR = R^{-1}(P^{-1}AP)R = (PR)^{-1}A(PR), and so A is similar to C. Thus, “A is similar to B” is an equivalence relation. Theorem If A is similar to B, then A and B have the same eigenvalues. Proof Since A is similar to B, there exists an invertible matrix P so that B = P^{-1}AP. Now, det(B − λI) = det(P^{-1}AP − λI) = det(P^{-1}(A − λI)P) = det(A − λI). Since A and B have the same characteristic equation, they have the same eigenvalues. Example Find the eigenvalues for . Solution Since is similar to the diagonal matrix , they have the same eigenvalues. Because the eigenvalues of an upper (or lower) triangular matrix are the entries on the main diagonal, we see that the eigenvalues for , and, hence, are .
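A small numerical illustration of the two theorems quoted here, sketched by the editor (not from the handout; NumPy assumed, and `P` is just a random invertible matrix): a triangular matrix has its diagonal entries as eigenvalues, and a similar matrix B = P^{-1}AP shares them.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])            # upper triangular: eigenvalues are 2 and 5
P = rng.standard_normal((2, 2))
while abs(np.linalg.det(P)) < 1e-6:   # make sure P is invertible
    P = rng.standard_normal((2, 2))
B = np.linalg.inv(P) @ A @ P          # B is similar to A

# Similar matrices share eigenvalues (up to ordering / floating-point error).
print(sorted(np.linalg.eigvals(A).real))   # [2.0, 5.0]
print(sorted(np.linalg.eigvals(B).real))   # approximately [2.0, 5.0]
```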
  • Solutions to Math 53 Practice Second Midterm
    Solutions to Math 53 Practice Second Midterm 1. (20 points) (a) (10 points) Write down the general solution to the system of equations dx/dt = x + y, dy/dt = −13x − 3y in terms of real-valued functions. (b) (5 points) The trajectories of this equation in the (x, y)-plane rotate around the origin. Is this rotation clockwise or counterclockwise? If we changed the first equation to dx/dt = −x + y, would it change the direction of rotation? (c) (5 points) Suppose (x1(t), y1(t)) and (x2(t), y2(t)) are two solutions to this equation. Define the Wronskian determinant W(t) of these two solutions. If W(0) = 1, what is W(10)? Note: the material for this part will be covered in the May 8 and May 10 lectures. (a) The corresponding matrix is A = [[1, 1], [−13, −3]], with characteristic polynomial (1 − λ)(−3 − λ) + 13 = 0, so that λ^2 + 2λ + 10 = 0, so that λ = −1 ± 3i. The corresponding eigenvector v to λ1 = −1 + 3i must be a solution to [[2 − 3i, 1], [−13, −2 − 3i]] v = 0, so we can take v = (1, 3i − 2). Now we just need to compute the real and imaginary parts of v e^{λ1 t} = e^{−t} (e^{3it}, (3i − 2)e^{3it}) = e^{−t} (cos(3t) + i sin(3t), −2 cos(3t) − 3 sin(3t) + i(3 cos(3t) − 2 sin(3t))). The general solution is thus expressed as c1 e^{−t} (cos(3t), −2 cos(3t) − 3 sin(3t)) + c2 e^{−t} (sin(3t), 3 cos(3t) − 2 sin(3t)). (b) We can examine the direction field.
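The eigenvalue computation in part (a) is easy to confirm numerically. The sketch below is illustrative only (not part of the solution set; NumPy assumed) and also checks that one of the real solutions above really satisfies the system.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-13.0, -3.0]])
print(np.linalg.eigvals(A))          # approximately -1 + 3i and -1 - 3i

# One real solution from the general solution: x(t) = e^{-t} (cos 3t, -2 cos 3t - 3 sin 3t)
def x(t):
    return np.exp(-t) * np.array([np.cos(3*t), -2*np.cos(3*t) - 3*np.sin(3*t)])

t, h = 0.4, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)    # finite-difference approximation of x'(t)
print(np.allclose(deriv, A @ x(t)))        # True
```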
  • MATH 237 Differential Equations and Computer Methods
    Queen’s University Mathematics and Engineering and Mathematics and Statistics MATH 237 Differential Equations and Computer Methods Supplemental Course Notes Serdar Yüksel November 19, 2010 This document is a collection of supplemental lecture notes used for Math 237: Differential Equations and Computer Methods. Serdar Yüksel Contents 1 Introduction to Differential Equations 7 1.1 Introduction 7 1.2 Classification of Differential Equations 7 1.2.1 Ordinary Differential Equations 8 1.2.2 Partial Differential Equations 8 1.2.3 Homogeneous Differential Equations 8 1.2.4 N-th order Differential Equations 8 1.2.5 Linear Differential Equations 8 1.3 Solutions of Differential Equations 9 1.4 Direction Fields 10 1.5 Fundamental Questions on First-Order Differential Equations 10 2 First-Order Ordinary Differential Equations 11 2.1 Exact Differential Equations 11 2.2 Method of Integrating Factors 12 2.3 Separable Differential Equations 13 2.4 Differential Equations with Homogenous Coefficients 13 2.5 First-Order Linear Differential Equations 14 2.6 Applications 14 3 Higher-Order Ordinary Linear Differential Equations 15 3.1 Higher-Order Differential Equations 15 3.1.1 Linear Independence 16 3.1.2 Wronskian of a set of Solutions 16 3.1.3 Non-Homogeneous Problem
  • The Exponential of a Matrix
    5-28-2012 The Exponential of a Matrix The solution to the exponential growth equation dx/dt = kx is given by x = c0 e^{kt}. It is natural to ask whether you can solve a constant coefficient linear system ~x′ = A~x in a similar way. If a solution to the system is to have the same form as the growth equation solution, it should look like ~x = e^{At} ~x0. The first thing I need to do is to make sense of the matrix exponential e^{At}. The Taylor series for e^z is e^z = Σ_{n=0}^∞ z^n/n!. It converges absolutely for all z. If A is an n × n matrix with real entries, define e^{At} = Σ_{n=0}^∞ t^n A^n/n!. The powers A^n make sense, since A is a square matrix. It is possible to show that this series converges for all t and every matrix A. Differentiating the series term-by-term, d/dt e^{At} = Σ_{n=0}^∞ n t^{n−1} A^n/n! = Σ_{n=1}^∞ t^{n−1} A^n/(n − 1)! = A Σ_{n=1}^∞ t^{n−1} A^{n−1}/(n − 1)! = A Σ_{m=0}^∞ t^m A^m/m! = A e^{At}. This shows that e^{At} solves the differential equation ~x′ = A~x. The initial condition vector ~x(0) = ~x0 yields the particular solution ~x = e^{At} ~x0. This works, because e^{0·A} = I (by setting t = 0 in the power series). Another familiar property of ordinary exponentials holds for the matrix exponential: If A and B commute (that is, AB = BA), then e^A e^B = e^{A+B}.
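A hedged sketch of the fact quoted at the end of this excerpt, using `scipy.linalg.expm` (the matrices A and B below are the editor's own small examples): the identity e^A e^B = e^{A+B} can fail when AB ≠ BA and does hold for commuting matrices.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])

# A and B do not commute, and exp(A) exp(B) != exp(A + B) here.
print(np.allclose(A @ B, B @ A))                      # False
print(np.allclose(expm(A) @ expm(B), expm(A + B)))    # False

# For commuting matrices (e.g. A and 2A) the identity does hold.
print(np.allclose(expm(A) @ expm(2 * A), expm(3 * A)))  # True
```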
  • Chapter Four Determinants
    Chapter Four Determinants In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form T~x = ~b where T is a square matrix. We noted a distinction between two classes of T ’s. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular T is associated with a unique solution in any system, such as the homogeneous system ~b = ~0, then T is associated with a unique solution for every ~b. We call such a matrix of coefficients ‘nonsingular’. The other kind of T , where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call ‘singular’. Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an n × n matrix T is equivalent to each of these: • a system T~x = ~b has a solution, and that solution is unique; • Gauss-Jordan reduction of T yields an identity matrix; • the rows of T form a linearly independent set; • the columns of T form a basis for R^n; • any map that T represents is an isomorphism; • an inverse matrix T^{-1} exists. So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say ‘matrix’ in place of ‘square matrix’.) More precisely, we will develop infinitely many formulas, one for 1×1 matrices, one for 2×2 matrices, etc.
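A minimal numerical illustration of the nonsingular/singular distinction described here (the editor's own sketch; the matrices `T` and `S` are arbitrary examples, NumPy assumed): a nonzero determinant goes together with full rank and a unique solution, a zero determinant with rank deficiency.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Nonsingular: det(T) != 0, so T x = b has exactly one solution and T^{-1} exists.
print(np.linalg.det(T))            # approximately 5
print(np.linalg.solve(T, b))       # the unique solution, [0.2, 0.6]
print(np.linalg.matrix_rank(T))    # 2: rows (and columns) are linearly independent

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
# Singular: det(S) is 0 and rank < 2, so S x = b has no solution or infinitely many.
print(np.linalg.det(S), np.linalg.matrix_rank(S))   # approximately 0, rank 1
```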
  • Handout 9 More Matrix Properties; the Transpose
    Handout 9 More matrix properties; the transpose Square matrix properties These properties only apply to a square matrix, i.e. n × n. • The leading diagonal is the diagonal line consisting of the entries a11, a22, a33, ..., ann. • A diagonal matrix has zeros everywhere except the leading diagonal. • The identity matrix I has zeros off the leading diagonal, and 1 for each entry on the diagonal. It is a special case of a diagonal matrix, and AI = IA = A for any n × n matrix A. • An upper triangular matrix has all its non-zero entries on or above the leading diagonal. • A lower triangular matrix has all its non-zero entries on or below the leading diagonal. • A symmetric matrix has the same entries below and above the diagonal: aij = aji for any values of i and j between 1 and n. • An antisymmetric or skew-symmetric matrix has the opposite entries below and above the diagonal: aij = −aji for any values of i and j between 1 and n. This automatically means the diagonal entries must all be zero. Transpose To transpose a matrix, we reflect it across the line given by the leading diagonal a11, a22 etc. In general the result is a different shape to the original matrix: if A = (a11 a12 a13; a21 a22 a23), then A^T = (a11 a21; a12 a22; a13 a23), and [A^T]_ij = A_ji. • If A is m × n then A^T is n × m. • The transpose of a symmetric matrix is itself: A^T = A (recalling that only square matrices can be symmetric).
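A short NumPy sketch of these definitions (illustrative; the example matrices are the editor's, not the handout's): the transpose swaps shape and indices, and symmetric/antisymmetric matrices satisfy the stated relations.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3
print(A.T.shape)                   # (3, 2): the transpose of an m x n matrix is n x m
print(A.T[0, 1] == A[1, 0])        # True: [A^T]_ij = A_ji

S = np.array([[1, 7],
              [7, 2]])
K = np.array([[0, 3],
              [-3, 0]])
print(np.array_equal(S.T, S))      # True: symmetric
print(np.array_equal(K.T, -K))     # True: antisymmetric, so its diagonal is zero
```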
  • On the Eigenvalues of Euclidean Distance Matrices
    Volume 27, N. 3, pp. 237–250, 2008 Copyright © 2008 SBMAC ISSN 0101-8205 www.scielo.br/cam On the eigenvalues of Euclidean distance matrices A.Y. ALFAKIH∗ Department of Mathematics and Statistics University of Windsor, Windsor, Ontario N9B 3P4, Canada E-mail: [email protected] Abstract. In this paper, the notion of equitable partitions (EP) is used to study the eigenvalues of Euclidean distance matrices (EDMs). In particular, EP is used to obtain the characteristic polynomials of regular EDMs and non-spherical centrally symmetric EDMs. The paper also presents methods for constructing cospectral EDMs and EDMs with exactly three distinct eigenvalues. Mathematical subject classification: 51K05, 15A18, 05C50. Key words: Euclidean distance matrices, eigenvalues, equitable partitions, characteristic polynomial. 1 Introduction An n × n nonzero matrix D = (dij) is called a Euclidean distance matrix (EDM) if there exist points p^1, p^2, ..., p^n in some Euclidean space R^r such that dij = ||p^i − p^j||^2 for all i, j = 1, ..., n, where || || denotes the Euclidean norm. Let p^i, i ∈ N = {1, 2, ..., n}, be the set of points that generate an EDM D. An m-partition of D is an ordered sequence π = (N1, N2, ..., Nm) of nonempty disjoint subsets of N whose union is N. The subsets N1, ..., Nm are called the cells of the partition. The n-partition of D where each cell consists #760/08. Received: 07/IV/08. Accepted: 17/VI/08. ∗Research supported by the Natural Sciences and Engineering Research Council of Canada and MITACS.
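To make the definition concrete, the sketch below (the editor's illustration, not from the paper; NumPy assumed) builds an EDM from random points and looks at its eigenvalues. The comment about a single positive eigenvalue is a standard fact about nonzero EDMs assumed here, not something proved in the excerpt.

```python
import numpy as np

# Points p^1, ..., p^n in R^r; the EDM has entries d_ij = ||p^i - p^j||^2.
rng = np.random.default_rng(1)
P = rng.standard_normal((5, 3))                    # 5 points in R^3
diff = P[:, None, :] - P[None, :, :]
D = (diff ** 2).sum(axis=-1)                       # 5 x 5 Euclidean distance matrix

eigs = np.sort(np.linalg.eigvalsh(D))
print(eigs)                                        # D is symmetric with zero diagonal
# Standard fact (assumed, not proved in the excerpt): a nonzero EDM has exactly
# one positive eigenvalue.
print(np.sum(eigs > 1e-9))                         # 1
```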
  • 3.3 Diagonalization
    3.3 Diagonalization Let A = [[−4, 1], [4, −4]]. Then (1, 2)^T and (1, −2)^T are eigenvectors of A, with corresponding eigenvalues −2 and −6 respectively (check). This means A (1, 2)^T = −2 (1, 2)^T and A (1, −2)^T = −6 (1, −2)^T. Thus A [[1, 1], [2, −2]] = (−2 (1, 2)^T, −6 (1, −2)^T) = [[−2, −6], [−4, 12]]. We have A [[1, 1], [2, −2]] = [[1, 1], [2, −2]] [[−2, 0], [0, −6]] (Think about this). Thus AE = ED where E = [[1, 1], [2, −2]] has the eigenvectors of A as columns and D = [[−2, 0], [0, −6]] is the diagonal matrix having the eigenvalues of A on the main diagonal, in the order in which their corresponding eigenvectors appear as columns of E. Definition 3.3.1 An n × n matrix A is diagonal if all of its non-zero entries are located on its main diagonal, i.e. if Aij = 0 whenever i ≠ j. Diagonal matrices are particularly easy to handle computationally. If A and B are diagonal n × n matrices then the product AB is obtained from A and B by simply multiplying entries in corresponding positions along the diagonal, and AB = BA.
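The claim AE = ED for this example is easy to verify numerically. This is a sketch assuming NumPy, added by the editor rather than taken from the original section.

```python
import numpy as np

A = np.array([[-4.0, 1.0],
              [4.0, -4.0]])
E = np.array([[1.0, 1.0],
              [2.0, -2.0]])        # eigenvectors of A as columns
D = np.diag([-2.0, -6.0])          # matching eigenvalues on the diagonal

print(A @ E)                        # [[-2. -6.] [-4. 12.]]
print(E @ D)                        # the same matrix, so A E = E D
print(np.allclose(A @ E, E @ D))    # True
print(np.allclose(np.linalg.inv(E) @ A @ E, D))   # True: E^{-1} A E = D
```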
  • Linear Independence, the Wronskian, and Variation of Parameters
    LINEAR INDEPENDENCE, THE WRONSKIAN, AND VARIATION OF PARAMETERS JAMES KEESLING In this post we determine when a set of solutions of a linear differential equation are linearly independent. We first discuss the linear space of solutions for a homogeneous differential equation. 1. Homogeneous Linear Differential Equations We start with homogeneous linear nth-order ordinary differential equations with general coefficients. The form for the nth-order type of equation is the following. (1) a_n(t) d^n x/dt^n + a_{n−1}(t) d^{n−1} x/dt^{n−1} + ··· + a_0(t) x = 0 It is straightforward to solve such an equation if the functions a_i(t) are all constants. However, for general functions as above, it may not be so easy. However, we do have a principle that is useful. Because the equation is linear and homogeneous, if we have a set of solutions {x_1(t), ..., x_n(t)}, then any linear combination of the solutions is also a solution. That is (2) x(t) = C_1 x_1(t) + C_2 x_2(t) + ··· + C_n x_n(t) is also a solution for any choice of constants {C_1, C_2, ..., C_n}. Now if the solutions {x_1(t), ..., x_n(t)} are linearly independent, then (2) is the general solution of the differential equation. We will explain why later. What does it mean for the functions, {x_1(t), ..., x_n(t)}, to be linearly independent? The simple straightforward answer is that (3) C_1 x_1(t) + C_2 x_2(t) + ··· + C_n x_n(t) = 0 implies that C_1 = 0, C_2 = 0, ..., and C_n = 0, where the C_i's are arbitrary constants. This is the definition, but it is not so easy to determine from it just when the condition holds to show that a given set of functions, {x_1(t), x_2(t), ..., x_n(t)}, is linearly independent.
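For solutions of an ODE, linear independence is usually tested with the Wronskian named in the title. As an illustrative sketch by the editor (SymPy assumed, examples chosen arbitrarily), the determinant below is nonzero for an independent pair of solutions and zero for a dependent one.

```python
import sympy as sp

t = sp.symbols('t')
# Two solutions of x'' + x = 0; are they linearly independent?
x1, x2 = sp.cos(t), sp.sin(t)
W = sp.Matrix([[x1, x2],
               [sp.diff(x1, t), sp.diff(x2, t)]]).det()
print(sp.simplify(W))        # 1: nonzero, so cos(t) and sin(t) are independent

# A dependent pair gives a zero Wronskian.
y1, y2 = sp.exp(t), 3 * sp.exp(t)
W2 = sp.Matrix([[y1, y2],
                [sp.diff(y1, t), sp.diff(y2, t)]]).det()
print(sp.simplify(W2))       # 0
```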
  • The Exponential Function for Matrices
    The exponential function for matrices Matrix exponentials provide a concise way of describing the solutions to systems of homogeneous linear differential equations that parallels the use of ordinary exponentials to solve simple differential equations of the form y′ = λy. For square matrices the exponential function can be defined by the same sort of infinite series used in calculus courses, but some work is needed in order to justify the construction of such an infinite sum. Therefore we begin with some material needed to prove that certain infinite sums of matrices can be defined in a mathematically sound manner and have reasonable properties. Limits and infinite series of matrices Limits of vector valued sequences in R^n can be defined and manipulated much like limits of scalar valued sequences, the key adjustment being that distances between real numbers that are expressed in the form |s − t| are replaced by distances between vectors expressed in the form |x − y|. Similarly, one can talk about convergence of a vector valued infinite series Σ_{n=0}^∞ v_n in terms of the convergence of the sequence of partial sums s_n = Σ_{k=0}^n v_k. As in the case of ordinary infinite series, the best form of convergence is absolute convergence, which corresponds to the convergence of the real valued infinite series Σ_{n=0}^∞ |v_n| with nonnegative terms. A fundamental theorem states that a vector valued infinite series converges if the auxiliary series Σ_{n=0}^∞ |v_n| does, and there is a generalization of the standard M-test: If |v_n| ≤ M_n for all n where Σ_n M_n converges, then Σ_n v_n also converges.
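A small sketch of the series definition converging in practice (illustrative only, added by the editor; `scipy.linalg.expm` is used as the reference value, and the matrix A and the truncation points are arbitrary choices): the partial sums of Σ (tA)^n/n! approach exp(tA) rapidly.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 1.0

def partial_sum(N):
    """Partial sum s_N = sum_{n=0}^{N} (tA)^n / n! of the defining series."""
    s = np.zeros_like(A)
    term = np.eye(2)                 # (tA)^0 / 0!
    for n in range(N + 1):
        s = s + term
        term = term @ (t * A) / (n + 1)
    return s

target = expm(t * A)
for N in (2, 5, 10, 20):
    print(N, np.linalg.norm(partial_sum(N) - target))
# The error shrinks rapidly with N, as absolute convergence of the series suggests.
```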