Short Notes on Linear Algebra 1 Vector Spaces, Independence


Short notes on Linear Algebra by Sanand D

----------------------------------------------------------------

1 Vector spaces, independence, dimension, basis, linear operators

1.1 Introduction

Definition 1.1. Let V be a set whose elements are called vectors, with an operation of vector addition (+) that is commutative and associative and satisfies the following properties:
• There exists 0 ∈ V such that 0 + v = v for all v ∈ V.
• For all v ∈ V, there exists −v ∈ V such that v + (−v) = 0.
• For all v1, v2 ∈ V, v1 + v2 ∈ V.
Let F be a field such that there is an operation of scalar multiplication (·) between elements of F and V satisfying the following properties:
• 1·v = v for all v ∈ V, where 1 ∈ F.
• (a1 a2)v = a1(a2 v) for all a1, a2 ∈ F.
• a(v + w) = av + aw for all a ∈ F and all v, w ∈ V.
• (a1 + a2)v = a1 v + a2 v for all a1, a2 ∈ F.
A set V satisfying the properties above is said to be a vector space over the field F.

Example 1.2. C^n, R^n, C^(n×m), R^(n×m), solutions of homogeneous linear equations, solutions of homogeneous ODEs, and sets of real/complex valued continuous/differentiable/analytic functions are examples of vector spaces. The set of polynomials with real/complex coefficients forms a vector space over the real/complex numbers.

Definition 1.3. A subset W of V satisfying the properties above is called a subspace of V.

Example 1.4. R and R^2 form subspaces of R^3. The set of differentiable functions forms a subspace of the vector space of continuous functions. The set of complex polynomials of degree at most n forms a subspace of C[x]. (Geometrically, one can think of vector spaces as Euclidean spaces, and subspaces as planes passing through the origin.)

Suppose v1, …, vk ∈ V. Then ⟨v1, …, vk⟩, the span of {v1, …, vk}, is the collection of all linear combinations of v1, …, vk. The span of any set forms a subspace.

Definition 1.5. A set of non-zero vectors v1, …, vk is said to be independent if α1 v1 + … + αk vk = 0 implies that all αi (1 ≤ i ≤ k) are equal to zero.
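The independence condition in Definition 1.5 is equivalent to saying that the matrix whose columns are v1, …, vk has full column rank, which gives a simple numerical test. A minimal sketch in NumPy (the helper name `independent` and the sample vectors are illustrative choices, not from the notes):

```python
import numpy as np

def independent(vectors):
    """Stack the vectors as columns; they are independent iff the only
    solution of a1*v1 + ... + ak*vk = 0 is a1 = ... = ak = 0,
    i.e. iff the matrix has full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = v1 + 2 * v2  # lies in span{v1, v2}, so it depends on them

print(independent([v1, v2]))      # True
print(independent([v1, v2, v3]))  # False
```

Note the rank test is numerical: for floating-point data, `matrix_rank` treats singular values below a tolerance as zero.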
A set of vectors which is not independent is said to be dependent. (Geometrically, if v depends on v1, …, vk, then v lies in the space spanned by v1, …, vk, whereas if v is independent of v1, …, vk, then v lies outside the space spanned by v1, …, vk.)

Definition 1.6. A maximal linearly independent set is called a basis.

A basis is not unique. By the definition of basis and linear independence, each v ∈ V can be represented as a unique linear combination of basis vectors. Any linearly independent set can be extended to a basis by adjoining linearly independent vectors to the previous set.

Example 1.7. e1 = [1 0 … 0]^T, e2 = [0 1 … 0]^T, …, en = [0 0 … 1]^T form a basis for C^n. This is called the standard basis.

Definition 1.8. The dimension of a vector space is the cardinality of its basis.

Example 1.9. The dimension of C^n over C is n. The dimension of C[x] over C is infinite, with basis 1, x, …, x^n, …. The dimension of the subspace of polynomials of degree at most n is n + 1, with basis 1, x, …, x^n.

If V1 and V2 are two subspaces of V, then V1 + V2 and V1 ∩ V2 are subspaces of V. By choosing a basis for V1 and extending it to a basis of V1 + V2 by adjoining linearly independent vectors in V2 which are not in V1, it follows that

    dim(V1 + V2) = dim(V1) + dim(V2) − dim(V1 ∩ V2).

Definition 1.10. If V1 + V2 = V and V1 ∩ V2 = {0}, then we say that V is a direct sum of V1 and V2. It is denoted by V = V1 ⊕ V2.

Example 1.11. C^n = C ⊕ C ⊕ … ⊕ C, i.e., the direct sum of n copies of C.

1.2 Co-ordinates and linear maps on vector spaces

Co-ordinates: Let {v1, …, vn} be a basis of V and let v ∈ V. Then v can be expressed uniquely as a linear combination of the vi: v = α1 v1 + … + αn vn. Thus, w.r.t. this basis, v has co-ordinates [α1 α2 … αn]^T.

Matrix representation of linear operators: Let A : V → V. A is said to be linear if for all v, w ∈ V and c1, c2 ∈ F, A(c1 v + c2 w) = c1 Av + c2 Aw. Let {v1, …, vn} be a basis of V.
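Finding the co-ordinates of v w.r.t. a basis amounts to solving a linear system: if the basis vectors are the columns of a matrix B, the co-ordinate vector α is the unique solution of B α = v. A short NumPy sketch (the basis below is an arbitrary illustrative choice):

```python
import numpy as np

# Columns of B form a (non-standard) basis of R^3 -- an illustrative choice.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 1.0])

# Co-ordinates of v w.r.t. this basis: the unique alpha with B @ alpha = v.
alpha = np.linalg.solve(B, v)

# Reconstructing v as alpha_1*b1 + ... + alpha_n*bn recovers v.
print(np.allclose(B @ alpha, v))  # True
```

Uniqueness of the co-ordinates corresponds to B being invertible, which holds exactly when its columns are a basis.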
Then for any v ∈ V, Av = α1 Av1 + … + αn Avn. Thus, it is enough to define the action of A on basis vectors:

    Av = [Av1 Av2 … Avn] [α1 α2 … αn]^T.    (1)

Let Avi = a1i v1 + … + ani vn for 1 ≤ i ≤ n. Then the matrix representation of A w.r.t. the basis above is

        [ a11 a12 … a1n ]
    A = [ a21 a22 … a2n ]    (2)
        [  ⋮   ⋮  ⋱  ⋮  ]
        [ an1 an2 … ann ]

and the co-ordinates of Av w.r.t. the given basis are

         [ a11 a12 … a1n ] [ α1 ]
    Av = [ a21 a22 … a2n ] [ α2 ]    (3)
         [  ⋮   ⋮  ⋱  ⋮  ] [ ⋮  ]
         [ an1 an2 … ann ] [ αn ]

Definition 1.12. The kernel of an operator A : V → W is the set of vectors v ∈ V such that Av = 0. The kernel of A is denoted by ker(A), and it is a subspace of V. The image of A is the set of all w ∈ W such that w = Av for some v ∈ V. Im(A) is also a subspace. When A is represented by a matrix w.r.t. some basis, any vector in the image space of A is a linear combination of the columns of A. Thus, Im(A) is spanned by the columns of A.

Definition 1.13. The rank of a matrix is equal to the number of its linearly independent columns. Consequently, it is equal to the dimension of the image space of A.

Thus, Ax = b has a solution iff b lies in the column span of A, iff the augmented matrix [A b] has the same rank as A. Note that A is onto ⟺ the rank of A is equal to the dimension of the codomain space, and A is 1-1 ⟺ the dimension of ker(A) is equal to zero. A is 1-1 and onto ⟺ the dimension of ker(A) is zero and the rank of A equals the dimension of the codomain space. (Note that Im(A) is a subspace of the codomain. If rank(A) = dim(Im(A)) is equal to the dimension of the codomain, then Im(A) = the codomain space.)

Theorem 1.14 (Rank-Nullity). Let A : V → V where V is an n-dimensional vector space. Then rank(A) + dim(ker(A)) = n.

Proof. Let v1, …, vk form a basis for ker(A). Extend this set to a full basis of V, say v1, …, vn.
Claim: {Av_{k+1}, …, Avn} are linearly independent.
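Theorem 1.14 can be checked numerically for a concrete matrix: rank counts the nonzero singular values and nullity the zero ones. A minimal sketch (the matrix A below is an illustrative rank-deficient choice: its third column is the sum of the first two and its fourth column is zero):

```python
import numpy as np

# Illustrative operator on R^4: col3 = col1 + col2, col4 = 0,
# so rank(A) = 2 and dim ker(A) = 2.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

n = A.shape[1]                        # dimension of the domain
rank = np.linalg.matrix_rank(A)       # dim Im(A)

# For a square matrix, dim ker(A) is the number of zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))

print(rank, nullity)                  # 2 2
print(rank + nullity == n)            # True
```

The tolerance 1e-10 hedges against floating-point round-off; exact arithmetic would give singular values that are exactly zero.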
Suppose not; then α_{k+1} Av_{k+1} + … + αn Avn = 0 with not all αi equal to zero. This implies that A(α_{k+1} v_{k+1} + … + αn vn) = 0, hence α_{k+1} v_{k+1} + … + αn vn ∈ ker(A). Therefore, α_{k+1} v_{k+1} + … + αn vn = β1 v1 + … + βk vk. But this contradicts the linear independence of {v1, …, vn}. Therefore, {Av_{k+1}, …, Avn} are linearly independent. Since rank(A) = dim(Im(A)), rank(A) = n − k and dim(ker(A)) = k.

Suppose A : V → W. Let {v1, …, vn} be a basis of V and {w1, …, wm} be a basis for W. Then, to find a matrix representation for A, it is enough to define Avi (1 ≤ i ≤ n): Avi = a1i w1 + … + ami wm. Thus, A is represented by an m × n matrix A = [aij]. The Rank-Nullity theorem holds for these linear maps as well, where n is the dimension of the domain.

Dual spaces: For a vector space V, its dual V* consists of all linear maps from V to F (the underlying field). If V is an n-dimensional vector space whose elements are represented by column vectors w.r.t. some basis, then elements of V* are represented by 1 × n matrices, which can be thought of as row vectors. Thus, the dual space of column vectors is row vectors and vice versa. For any basis {v1, …, vn} of V, there exists a dual basis {v1*, …, vn*} of V* such that vi*(vj) = 1 if i = j and vi*(vj) = 0 if i ≠ j. A linear map A : V → W induces a map A* : W* → V*. The action of A* is defined as A*w*(v) := w*(Av). Let A = [aij] be a matrix representation of A w.r.t. bases v1, …, vn and w1, …, wm of V and W respectively, and let [bij] be a representation of A* w.r.t. the dual bases. Consider the action of A* on the basis vectors w1*, …, wm* of W*. Since A* = [bij] is a matrix representation of A*,

    A* wi* = b1i v1* + … + bni vn*    (4)

    ⟹ A* wi*(vj) = (b1i v1* + … + bni vn*)(vj) = bji.    (5)

Observe that since A*w*(v) = w*(Av) from the definition of A*,

    A* wi*(vj) = wi*(Avj) = wi*(a1j w1 + … + amj wm) = aij.    (6)

Thus, aij = bji, and the matrix representation of the induced map A* is the transpose of A.
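The defining identity A*w*(v) = w*(Av), and the conclusion that A* is represented by the transpose, can be checked numerically: with functionals written as row vectors, applying A* is multiplication by A^T. A small sketch (the sizes and the random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))    # a map A : R^4 -> R^3 (illustrative)
v = rng.standard_normal(4)         # a vector in the domain
w_star = rng.standard_normal(3)    # a functional on R^3, as a row vector

# (A* w*)(v): the induced map acts as the transpose on row vectors.
lhs = (A.T @ w_star) @ v
# w*(A v): the defining formula for A*.
rhs = w_star @ (A @ v)

print(np.allclose(lhs, rhs))  # True
```

Both sides are the scalar w^T A v; the identity holds for every A, v, and w*, not just this sample.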
Observe that (V*)* = V, i.e., the double dual of V is V itself for finite-dimensional vector spaces.

2 Change of basis

Let e1, …, en be the standard basis of a vector space V. Let this be an old basis, and let A_old be a matrix representation of a linear operator A : V → V.
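The notes break off here, but the relation this section is heading toward is the standard one: if the columns of P express a new basis in old co-ordinates, then A_new = P^{-1} A_old P represents the same operator w.r.t. the new basis. A sketch under that assumption (A_old and P below are illustrative choices):

```python
import numpy as np

# A_old: matrix of the operator w.r.t. the standard (old) basis.
A_old = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

# Columns of P: a new basis written in old co-ordinates (illustrative).
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Standard change-of-basis relation: A_new = P^{-1} A_old P.
A_new = np.linalg.inv(P) @ A_old @ P

# Sanity check: applying the operator in either basis describes the same map.
x_new = np.array([1.0, 2.0])   # co-ordinates of a vector in the new basis
x_old = P @ x_new              # the same vector in old co-ordinates
print(np.allclose(P @ (A_new @ x_new), A_old @ x_old))  # True
```

The check works because P converts new co-ordinates to old ones, so computing Ax in new co-ordinates and converting must agree with computing it directly in old co-ordinates.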