PROGRESS ON THE INVARIANT SUBSPACE PROBLEM

BEN GOLDMAN

Contents
1. Introduction
1.1. Finite Dimensions
2. Fundamentals of Functional Analysis
2.1. Banach Spaces
2.2. The Open Mapping Theorem
2.3. Compact Operators
3. Spectral Theory and Linear Methods
3.1. Elementary Functional Calculus
3.2. Simple Invariant Subspace Case
3.3. Gelfand's Spectral Radius Formula
3.4. Hilden's Method
4. Lomonosov's Proof and Nonlinear Methods
4.1. Schauder's Theorem
4.2. Lomonosov's Method
5. The Counterexample
5.1. Preliminaries
5.2. Constructing the Norm
5.3. The Remaining Lemmas
5.4. The Proof
6. Acknowledgements
References

Date: August, 2020.

1. Introduction

An important tool in working on any abstract space is to search for invariants, notions which do not change after a transformation. The most elementary of these is the fixed point, an element which remains stationary after a transformation of the space. The study of linear operators then introduces an invariant with nonzero dimension, the eigenvector. In this paper we study how these simple principles can be extended to a more abstract notion, the invariant subspace. We begin with some very elementary, motivating examples.

1.1. Finite Dimensions. Let V, W be complex vector spaces.

Definition 1.1.1. Recall that an operator T : V → W is linear if for all v, µ ∈ V and α, β ∈ ℂ we have T(αv + βµ) = αT(v) + βT(µ).

Definition 1.1.2. Recall that a nonzero vector v is an eigenvector of a linear operator T, with eigenvalue λ ∈ ℂ, if T(v) = λv.

Example 1. Consider the linear transformation T : ℝ² → ℝ² given by T(x, y) = (2x, 3y). This has eigenvectors (1, 0) and (0, 1) with real eigenvalues 2 and 3, respectively.

Example 2. Now consider the linear transformation T : ℝ² → ℝ² which rotates the plane by some angle θ. It is fairly easy to observe that T has no real eigenvalues. However, one observes that the unit disk, D = {v ∈ ℝ² : ‖v‖ ≤ 1}, is invariant under T (i.e. T(D) = D). This motivates our definition below:

Definition 1.1.3. A subspace W ⊆ V is invariant under a linear operator T : V → V if T(W) ⊆ W.

Remark. Granted, not all invariant subspaces are created equal. For instance, the origin in V is invariant under any linear transformation, but this does not tell us anything significant about the operator or the space. Similarly, for any surjective linear operator T : V → V we have T(V) = V, which is another unimportant example. We dub these the trivial invariant subspaces of an operator.

Definition 1.1.4. The orbit of a vector v ∈ V with respect to a linear operator T : V → V is given by

    O_v = {v, Tv, T²v, T³v, T⁴v, …, Tⁿv, …}.

One notes that the span of O_v is also an invariant subspace, but a fairly disappointing one.

Remark. In order to avoid the aforementioned simple cases, we state the Invariant Subspace Problem: under which conditions does T have a nontrivial, closed invariant subspace?

It is fairly easy to settle this in the case of a finite dimensional complex vector space.

Theorem 1.1.5. Any nonzero operator on a finite dimensional, complex vector space V admits an eigenvector.

Proof. [A16] Let n = dim(V) and suppose T : V → V is a nonzero linear operator. Select a nonzero v ∈ V; by finite dimensionality, the orbit O_v = {v, Tv, T²v, …, Tⁿv} must be a linearly dependent set. Thus, factoring the resulting polynomial over ℂ,

    α_n Tⁿ v + α_{n−1} T^{n−1} v + ⋯ + α_1 T v + α_0 v = (T − λ_1)(T − λ_2)⋯(T − λ_n) v = 0,

where we write λ_i for λ_i I (a multiple of the identity map) and λ_i ∈ ℂ. At least one of these factors must have a nontrivial kernel, which suffices to prove the theorem. One can adapt the proof above to real vector spaces, but the irreducible factors end up being quadratic instead. □

Corollary 1.1.6. Any nonzero operator on a finite dimensional, complex vector space V admits a closed, nontrivial invariant subspace.
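The following short numerical sketch is not from the paper; the rotation angle, the test vector v, and the tolerances are illustrative choices. It checks Example 2 (a rotation has no real eigenvalues) and then mimics the proof idea of Theorem 1.1.5: the linearly dependent orbit yields an annihilating polynomial, and a factor T − λI with nontrivial kernel exposes an eigenvector.

```python
import numpy as np

# Example 2: rotation of the plane has no real eigenvalues,
# yet the closed unit disk is invariant under it.
theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(T))             # complex pair exp(+-i*theta), nothing real

# Proof idea of Theorem 1.1.5 over C: the orbit {v, Tv, ..., T^n v} is linearly
# dependent, so some polynomial p annihilates v at T; each root lambda of p gives
# a factor (T - lambda*I), and at least one factor has a nontrivial kernel.
n = T.shape[0]
v = np.array([1.0, 2.0])                # arbitrary nonzero test vector
orbit = np.column_stack([np.linalg.matrix_power(T, k) @ v for k in range(n + 1)])

alpha = np.linalg.svd(orbit)[2][-1]     # null-space coefficients: orbit @ alpha = 0
roots = np.roots(alpha[::-1])           # roots of p(t) = sum_k alpha_k t^k
for lam in roots:
    factor = T - lam * np.eye(n)
    if np.linalg.matrix_rank(factor, tol=1e-10) < n:
        # any nonzero null vector of this factor is an eigenvector of T
        w = np.linalg.svd(factor)[2][-1].conj()
        print(lam, np.allclose(T @ w, lam * w))
```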
2. Fundamentals of Functional Analysis

2.1. Banach Spaces.

Definition 2.1.1. Recall that a Banach space is a complete normed vector space, with norm ‖·‖.

Lemma 2.1.2 (Riesz). Let B = {x ∈ E : ‖x‖ = 1} be the unit sphere in an infinite dimensional Banach space (E, ‖·‖). If Y ⊂ E is a closed proper subspace of E then for every ε > 0 there exists x* = x*(ε) ∈ B such that

    ‖y − x*‖ > 1/(1 + ε)   for all y ∈ Y.

Proof. Pick an arbitrary x ∈ E \ Y, so that d = inf_{y∈Y} ‖x − y‖ > 0. Select y* ∈ Y so that d ≤ ‖x − y*‖ < d + dε and set

    x* = (x − y*)/‖x − y*‖ ∈ B.

We then have, for any y ∈ Y,

    ‖x* − y‖ = (1/‖x − y*‖) ‖x − ŷ‖ ≥ (1/(d + dε)) inf_{ŷ∈Y} ‖x − ŷ‖ > d/(d + dε) = 1/(1 + ε),

where ŷ = y* + ‖x − y*‖ y ∈ Y. □

Theorem 2.1.3 (Riesz). If (E, ‖·‖) is an infinite dimensional Banach space then the unit sphere, B, is not compact.

Proof. Pick x₁ ∈ B and set Y₁ = span{x₁}. Apply the previous lemma to yield x₂ ∈ B such that inf_{y∈Y₁} ‖x₂ − y‖ > 1/2, and let Y₂ = span{x₁, x₂}. Continue this process inductively to yield a sequence {xₙ} and subspaces {Yₙ} such that {xₙ} has no convergent subsequence. □

This stands in stark contrast to finite dimensional vector spaces, whose unit spheres are compact. We conclude this subsection with a fact about linear operators on Banach spaces.

Definition 2.1.4. We define a norm on the space of linear operators between Banach spaces, T : (X, ‖·‖_X) → (Y, ‖·‖_Y), by

    ‖T‖ = sup_{x ∈ B} ‖T(x)‖_Y.

One notes that there is not necessarily a vector x₀ ∈ B for which ‖T(x₀)‖ = ‖T‖, due to the loss of compactness.

Theorem 2.1.5. If T : (X, ‖·‖_X) → (Y, ‖·‖_Y) is a linear map between Banach spaces then it is continuous if and only if ‖T‖ < +∞.

Proof. If ‖T‖ < +∞ then for all ε > 0, if ‖x − y‖_X < ε/‖T‖ then

    ‖T(x) − T(y)‖_Y = ‖T(x − y)‖_Y ≤ ‖T‖ ‖x − y‖_X < ε.

We will use the manipulation above, i.e. ‖T(x)‖_Y ≤ ‖T‖ ‖x‖_X, quite often. If T is unbounded then for each n ∈ ℕ select xₙ ∈ X such that ‖xₙ‖_X ≤ 1/n and ‖T(xₙ)‖_Y ≥ 1. This produces a sequence xₙ → 0 such that ‖T(xₙ)‖_Y ≥ 1 for all n. Thus lim sup_{x→0} ‖T(x)‖_Y ≥ 1, and so T is not continuous at the origin. □
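A brief numerical aside, not part of the paper: in ℓ² the standard basis vectors already realize the Riesz construction, since they are unit vectors a fixed distance √2 apart. The sketch below prints this for a finite truncation (the size N and the random test matrix are illustrative assumptions) and also checks the bound ‖T(x)‖ ≤ ‖T‖ ‖x‖ from the proof of Theorem 2.1.5, using the fact that for a matrix the operator norm is its largest singular value.

```python
import numpy as np

# Riesz construction made concrete: in l^2 the standard basis e_1, e_2, ... consists
# of unit vectors with ||e_n - e_m|| = sqrt(2) for n != m, so no subsequence converges
# and the unit sphere is not compact. We only print a finite truncation.
N = 8
e = np.eye(N)
gaps = {round(float(np.linalg.norm(e[i] - e[j])), 12) for i in range(N) for j in range(i)}
print(gaps)                       # a single value: sqrt(2)

# Theorem 2.1.5 in finite dimensions: the operator norm of a matrix is its largest
# singular value, and ||T x|| <= ||T|| * ||x|| for every x.
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))
op_norm = np.linalg.norm(T, 2)    # largest singular value
x = rng.standard_normal(4)
print(np.linalg.norm(T @ x) <= op_norm * np.linalg.norm(x) + 1e-12)   # True
```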
2.2. The Open Mapping Theorem.

Theorem 2.2.1 (Baire Category Theorem). Complete metric spaces are Baire spaces. More precisely, if (X, ρ) is a complete metric space and {Oₙ}_{n∈ℕ} is a countable collection of open dense subsets, then the intersection, I = ⋂_{n∈ℕ} Oₙ, is also dense. [R10]

We require the above theorem from outside functional analysis in order to justify the following results.

Theorem 2.2.2 (Open Mapping Theorem). If T : (X, ‖·‖_X) → (Y, ‖·‖_Y) is a surjective, continuous linear map then T is an open map.

Proof. [R10] Let B_X = B_X(0, 1) be the unit ball in X and B_Y the unit ball in Y. Observe that X = ⋃_{r∈ℕ} r B_X and that Y = ⋃_{r∈ℕ} T(r B_X). By the Baire Category Theorem, there exists R ∈ ℕ such that the closure of T(R B_X) = T(B_X(0, R)) has nonempty interior.

Pick B_Y(y₀, r₀) ⊆ cl(T(B_X(0, R))), so that for all y ∈ B_Y we have y₀ + r₀ y ∈ B_Y(y₀, r₀). Thus,

    r₀ y ∈ cl(T(R B_X)) − cl(T(R B_X)) ⊆ cl(T(2R B_X)),

and we thus have that B_Y ⊆ cl(T(R* B_X)), where R* = 2R/r₀.

For any y ∈ B_Y and ε > 0, we can therefore find a point x ∈ R* B_X such that ‖T(x) − y‖ < ε. We can refine this approach to get a stronger result. Select x₁ ∈ B_X(0, R*/2) for which ‖T(x₁) − y‖ < 1/2, and then xₙ ∈ B_X(0, R*/2ⁿ) such that ‖T(x₁ + x₂ + ⋯ + xₙ) − y‖ < 1/2ⁿ. Letting x = Σ_{n∈ℕ} xₙ, where ‖x‖ ≤ Σₙ ‖xₙ‖ ≤ R* < 2R*, gives, by continuity of T, that T(x) = y. Thus we have shown that B_Y ⊆ T(2R* B_X), and so B_Y(0, 1/(2R*)) ⊆ T(B_X).

Finally, one shows that T is open. Since T is linear, it suffices to prove that the image of the unit ball is open (other open sets are handled by translations and scalings). Take y ∈ T(B_X) and select x ∈ B_X such that T(x) = y. Pick ε > 0 such that B_X(x, ε) ⊆ B_X, and pick r sufficiently small so that B_Y(y, r) ⊆ T(B_X(x, ε)), so that B_Y(y, r) ⊆ T(B_X). □

We finish this subsection with a handful of useful statements that are immediate consequences of the Open Mapping Theorem.

Corollary 2.2.3. If T : (X, ‖·‖_X) → (Y, ‖·‖_Y) is a bijective, continuous operator between two Banach spaces then T⁻¹ is continuous. [R10]

Corollary 2.2.4. If ‖·‖₁ and ‖·‖₂ are two norms on a Banach space X such that ‖·‖₁ ≤ C ‖·‖₂, then they are equivalent. [R10]

2.3. Compact Operators.

Notation 2.3.1. Let X be a normed space. We write L(X) for the space of continuous linear operators mapping X into itself; if T ∈ L(X) then T : X → X.

For obvious reasons, we will be concentrating our efforts on these types of operators for the remainder of the paper. The loss of compactness in Banach spaces makes it difficult to study such linear operators to the same degree as one could in a Euclidean setting. However, we define a special class of operators which has very nice properties (and which, as we will see later on, satisfies the invariant subspace problem).
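The series construction inside the Open Mapping Theorem proof can be mimicked numerically. The sketch below is not from the paper: the random full-row-rank matrix (hence a surjective map) and the pseudoinverse-based corrections are illustrative assumptions, and the deliberate factor 1/2 forces the residual to halve at each stage, mirroring the 2⁻ⁿ bound; the proof itself only needs that some suitable correction exists in a ball of radius R*/2ⁿ.

```python
import numpy as np

# Solve T x = y by accumulating corrections x_1 + x_2 + ..., where the n-th partial
# sum leaves a residual of size roughly 2^{-n}, as in the Open Mapping Theorem proof.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 5))          # full row rank => surjective map R^5 -> R^3
y = rng.standard_normal(3)
pinv = np.linalg.pinv(T)

x = np.zeros(5)
residuals = []
for n in range(40):
    r = y - T @ x                        # current residual
    x = x + 0.5 * (pinv @ r)             # partial correction: since T @ pinv = I here,
                                         # the residual is halved at every stage
    residuals.append(np.linalg.norm(y - T @ x))

print(residuals[:4])                     # geometric decay, like ||y||/2, /4, /8, ...
print(np.allclose(T @ x, y))             # the accumulated series x solves T x = y
```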