1 Inner Products and Norms


1 Inner Products and Norms

ORF 523 Lecture 2, Spring 2017, Princeton University
Instructor: A.A. Ahmadi
Scribe: G. Hall
Thursday, February 9, 2017

When in doubt about the accuracy of these notes, please cross-check with the instructor's notes at aaa.princeton.edu/orf523. Any typos should be emailed to [email protected].

Today we review basic math concepts that you will need throughout the course:

• Inner products and norms
• Positive semidefinite matrices
• Basic differential calculus

1.1 Inner products

1.1.1 Definition

Definition 1 (Inner product). A function $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is an inner product if

1. $\langle x, x \rangle \geq 0$, and $\langle x, x \rangle = 0 \Leftrightarrow x = 0$ (positivity)
2. $\langle x, y \rangle = \langle y, x \rangle$ (symmetry)
3. $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ (additivity)
4. $\langle r x, y \rangle = r \langle x, y \rangle$ for all $r \in \mathbb{R}$ (homogeneity)

Homogeneity and additivity in the second argument follow:
$$\langle x, r y \rangle = \langle r y, x \rangle = r \langle y, x \rangle = r \langle x, y \rangle,$$
using properties (2), (4), and again (2) respectively, and
$$\langle x, y + z \rangle = \langle y + z, x \rangle = \langle y, x \rangle + \langle z, x \rangle = \langle x, y \rangle + \langle x, z \rangle,$$
using properties (2), (3), and again (2).

1.1.2 Examples

• The standard inner product on $\mathbb{R}^n$ is
$$\langle x, y \rangle = x^T y = \sum_i x_i y_i, \qquad x, y \in \mathbb{R}^n.$$

• The standard inner product between matrices is
$$\langle X, Y \rangle = \operatorname{Tr}(X^T Y) = \sum_i \sum_j X_{ij} Y_{ij},$$
where $X, Y \in \mathbb{R}^{m \times n}$.

Notation: Here, $\mathbb{R}^{m \times n}$ is the space of real $m \times n$ matrices, and $\operatorname{Tr}(Z)$ is the trace of a real square matrix $Z$, i.e., $\operatorname{Tr}(Z) = \sum_i Z_{ii}$.

Note: The matrix inner product is the same as our original inner product between two vectors of length $mn$ obtained by stacking the columns of the two matrices.

• A less classical example in $\mathbb{R}^2$ is the following:
$$\langle x, y \rangle = 5 x_1 y_1 + 8 x_2 y_2 - 6 x_1 y_2 - 6 x_2 y_1.$$
Properties (2), (3), and (4) are obvious; positivity is less obvious. It can be seen by writing
$$\langle x, x \rangle = 5 x_1^2 + 8 x_2^2 - 12 x_1 x_2 = (x_1 - 2 x_2)^2 + (2 x_1 - 2 x_2)^2 \geq 0,$$
$$\langle x, x \rangle = 0 \Leftrightarrow x_1 - 2 x_2 = 0 \text{ and } 2 x_1 - 2 x_2 = 0 \Leftrightarrow x_1 = 0 \text{ and } x_2 = 0.$$

1.1.3 Properties of inner products

Definition 2 (Orthogonality). We say that $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0$.

Theorem 1 (Cauchy-Schwarz). For $x, y \in \mathbb{R}^n$,
$$|\langle x, y \rangle| \leq \|x\| \, \|y\|,$$
where $\|x\| := \sqrt{\langle x, x \rangle}$ is the length of $x$ (it is also a norm, as we will show later on).

Proof: First assume that $\|x\| = \|y\| = 1$. Then
$$\|x - y\|^2 \geq 0 \;\Rightarrow\; \langle x - y, x - y \rangle = \langle x, x \rangle + \langle y, y \rangle - 2 \langle x, y \rangle \geq 0 \;\Rightarrow\; \langle x, y \rangle \leq 1.$$
Now consider any $x, y \in \mathbb{R}^n$. If one of the vectors is zero, the inequality trivially holds. If both are nonzero, then
$$\left\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \right\rangle \leq 1 \;\Rightarrow\; \langle x, y \rangle \leq \|x\| \cdot \|y\|. \tag{1}$$
Since (1) holds for all $x, y$, we may replace $y$ with $-y$:
$$\langle x, -y \rangle \leq \|x\| \cdot \|{-y}\| = \|x\| \cdot \|y\|,$$
and since $\langle x, -y \rangle = -\langle x, y \rangle$ by homogeneity in the second argument, this gives $\langle x, y \rangle \geq -\|x\| \cdot \|y\|$. Combining the two bounds yields $|\langle x, y \rangle| \leq \|x\| \, \|y\|$.

1.2 Norms

1.2.1 Definition

Definition 3 (Norm). A function $f : \mathbb{R}^n \to \mathbb{R}$ is a norm if

1. $f(x) \geq 0$, and $f(x) = 0 \Leftrightarrow x = 0$ (positivity)
2. $f(\alpha x) = |\alpha| f(x)$ for all $\alpha \in \mathbb{R}$ (homogeneity)
3. $f(x + y) \leq f(x) + f(y)$ (triangle inequality)

Examples:

• The 2-norm: $\|x\|_2 = \sqrt{\sum_i x_i^2}$
• The 1-norm: $\|x\|_1 = \sum_i |x_i|$
• The $\infty$-norm: $\|x\|_\infty = \max_i |x_i|$
• The $p$-norm: $\|x\|_p = \left( \sum_i |x_i|^p \right)^{1/p}$, $p \geq 1$

Lemma 1. Take any inner product $\langle \cdot, \cdot \rangle$ and define $f(x) = \sqrt{\langle x, x \rangle}$. Then $f$ is a norm.

Proof: Positivity follows from the definition. For homogeneity,
$$f(\alpha x) = \sqrt{\langle \alpha x, \alpha x \rangle} = |\alpha| \sqrt{\langle x, x \rangle}.$$
We prove the triangle inequality by contradiction. If it is not satisfied, then there exist $x, y$ such that
$$\sqrt{\langle x + y, x + y \rangle} > \sqrt{\langle x, x \rangle} + \sqrt{\langle y, y \rangle}$$
$$\Rightarrow\; \langle x + y, x + y \rangle > \langle x, x \rangle + 2 \sqrt{\langle x, x \rangle \langle y, y \rangle} + \langle y, y \rangle$$
$$\Rightarrow\; 2 \langle x, y \rangle > 2 \sqrt{\langle x, x \rangle \langle y, y \rangle},$$
which contradicts Cauchy-Schwarz.

Note: Not every norm comes from an inner product.
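(Aside, not part of the original lecture.) The definitions above are easy to sanity-check numerically. The short Python sketch below implements the less classical inner product on $\mathbb{R}^2$ from Section 1.1.2 as $x^T Q y$ with a symmetric matrix $Q$, builds the norm it induces via Lemma 1, and spot-checks Cauchy-Schwarz and the triangle inequality on random vectors; the helper names `inner` and `induced_norm` are our own.

```python
import numpy as np

# <x, y> = 5 x1 y1 + 8 x2 y2 - 6 x1 y2 - 6 x2 y1, written as x^T Q y
# with the symmetric (and positive definite) matrix Q below.
Q = np.array([[5.0, -6.0],
              [-6.0, 8.0]])

def inner(x, y):
    """The nonstandard inner product on R^2 from Section 1.1.2."""
    return x @ Q @ y

def induced_norm(x):
    """The norm sqrt(<x, x>) induced by the inner product (Lemma 1)."""
    return np.sqrt(inner(x, x))

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||  (Theorem 1)
    assert abs(inner(x, y)) <= induced_norm(x) * induced_norm(y) + 1e-12
    # Triangle inequality: ||x + y|| <= ||x|| + ||y||  (Lemma 1)
    assert induced_norm(x + y) <= induced_norm(x) + induced_norm(y) + 1e-12
```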
1.2.2 Matrix norms

Matrix norms are functions $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ that satisfy the same properties as vector norms. Let $A \in \mathbb{R}^{m \times n}$. Here are a few examples of matrix norms:

• The Frobenius norm: $\|A\|_F = \sqrt{\operatorname{Tr}(A^T A)} = \sqrt{\sum_{i,j} A_{i,j}^2}$
• The sum-absolute-value norm: $\|A\|_{\text{sav}} = \sum_{i,j} |A_{i,j}|$
• The max-absolute-value norm: $\|A\|_{\text{mav}} = \max_{i,j} |A_{i,j}|$

Definition 4 (Operator norm). An operator (or induced) matrix norm is a norm $\|\cdot\|_{a,b} : \mathbb{R}^{m \times n} \to \mathbb{R}$ defined as
$$\|A\|_{a,b} = \max_x \|A x\|_a \quad \text{s.t. } \|x\|_b \leq 1,$$
where $\|\cdot\|_a$ is a vector norm on $\mathbb{R}^m$ and $\|\cdot\|_b$ is a vector norm on $\mathbb{R}^n$.

Notation: When the same vector norm is used in both spaces, we write
$$\|A\|_c = \max_x \|A x\|_c \quad \text{s.t. } \|x\|_c \leq 1.$$

Examples:

• $\|A\|_2 = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}$ denotes the largest eigenvalue.
• $\|A\|_1 = \max_j \sum_i |A_{ij}|$, i.e., the maximum column sum.
• $\|A\|_\infty = \max_i \sum_j |A_{ij}|$, i.e., the maximum row sum.

Notice that not all matrix norms are induced norms. An example is the Frobenius norm: $\|I\| = 1$ for any induced norm, but $\|I\|_F = \sqrt{n}$.

Lemma 2. Every induced norm is submultiplicative, i.e.,
$$\|A B\| \leq \|A\| \, \|B\|.$$

Proof: We first show that $\|A x\| \leq \|A\| \, \|x\|$. Suppose this were not the case. Then
$$\|A x\| > \|A\| \, \|x\| \;\Rightarrow\; \frac{1}{\|x\|} \|A x\| > \|A\| \;\Rightarrow\; \left\| A \frac{x}{\|x\|} \right\| > \|A\|,$$
but $\frac{x}{\|x\|}$ is a vector of unit norm. This contradicts the definition of $\|A\|$. Now we proceed to prove the claim:
$$\|A B\| = \max_{\|x\| \leq 1} \|A B x\| \leq \max_{\|x\| \leq 1} \|A\| \, \|B x\| = \|A\| \max_{\|x\| \leq 1} \|B x\| = \|A\| \, \|B\|.$$

Remark: This is only true for induced norms that use the same vector norm in both spaces. When the vector norms are different, submultiplicativity can fail to hold. Consider, e.g., the induced norm $\|\cdot\|_{\infty,2}$ and the matrices
$$A = \begin{pmatrix} \sqrt{2}/2 & \sqrt{2}/2 \\ -\sqrt{2}/2 & \sqrt{2}/2 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}.$$
In this case,
$$\|A B\|_{\infty,2} > \|A\|_{\infty,2} \cdot \|B\|_{\infty,2}.$$
Indeed, the image of the unit circle under $A$ (notice that $A$ is a rotation matrix of angle $\pi/4$) stays within the unit square, so $\|A\|_{\infty,2} \leq 1$. By similar reasoning, $\|B\|_{\infty,2} \leq 1$. This implies that $\|A\|_{\infty,2} \, \|B\|_{\infty,2} \leq 1$. However, $\|A B\|_{\infty,2} \geq \sqrt{2}$, as $\|A B x\|_\infty = \sqrt{2}$ for $x = (1, 0)^T$.

Example of a norm that is not submultiplicative:
$$\|A\|_{\text{mav}} = \max_{i,j} |A_{i,j}|.$$
This can be seen as follows: any submultiplicative norm satisfies $\|A^2\| \leq \|A\|^2$, but for
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad A^2 = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix},$$
we have $\|A^2\|_{\text{mav}} = 2 > 1 = \|A\|_{\text{mav}}^2$.

Remark: Not all submultiplicative norms are induced norms. An example is the Frobenius norm.

1.2.3 Dual norms

Definition 5 (Dual norm). Let $\|\cdot\|$ be any norm. Its dual norm is defined as
$$\|x\|_* = \max_y x^T y \quad \text{s.t. } \|y\| \leq 1.$$
You can think of this as the operator norm of $x^T$.

The dual norm is indeed a norm. The first two properties are straightforward to prove. The triangle inequality can be shown in the following way:
$$\|x + z\|_* = \max_{\|y\| \leq 1} (x^T y + z^T y) \leq \max_{\|y\| \leq 1} x^T y + \max_{\|y\| \leq 1} z^T y = \|x\|_* + \|z\|_*.$$

Examples:

1. $\|x\|_{1*} = \|x\|_\infty$
2. $\|x\|_{2*} = \|x\|_2$
3. $\|x\|_{\infty*} = \|x\|_1$

Proofs:

• The proof of (1) is left as an exercise.
• Proof of (2): We have
$$\|x\|_{2*} = \max_y x^T y \quad \text{s.t. } \|y\|_2 \leq 1.$$
Cauchy-Schwarz implies that $x^T y \leq \|x\|_2 \, \|y\|_2 \leq \|x\|_2$, and $y = \frac{x}{\|x\|_2}$ achieves this bound.
• Proof of (3): We have
$$\|x\|_{\infty*} = \max_y x^T y \quad \text{s.t. } \|y\|_\infty \leq 1,$$
so $y_{\text{opt}} = \operatorname{sign}(x)$ and the optimal value is $\|x\|_1$.
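(Aside, not part of the original lecture.) The mismatched-norm counterexample and the induced-norm formulas above can be verified numerically. In the sketch below, the hypothetical helper `norm_inf_2` estimates $\|M\|_{\infty,2}$ by sweeping the unit circle (a lower-bound estimate that is essentially tight here), NumPy's built-in matrix norms are checked against the max-column-sum and max-row-sum formulas, and the last lines check the dual-norm example $\|x\|_{\infty*} = \|x\|_1$.

```python
import numpy as np

def norm_inf_2(M, n_grid=100_000):
    """Estimate ||M||_{inf,2} = max ||Mx||_inf s.t. ||x||_2 <= 1
    by sweeping points on the unit circle (inputs in R^2 only)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_grid)
    X = np.vstack([np.cos(t), np.sin(t)])  # columns x with ||x||_2 = 1
    return np.abs(M @ X).max()             # max over samples of ||Mx||_inf

s = np.sqrt(2.0) / 2.0
A = np.array([[s, s], [-s, s]])            # rotation by pi/4
B = np.array([[1.0, 0.0], [1.0, 0.0]])

nA, nB, nAB = norm_inf_2(A), norm_inf_2(B), norm_inf_2(A @ B)
print(nA, nB, nAB)                         # approx. 1.0, 1.0, 1.414
assert nAB > nA * nB                       # submultiplicativity fails here

# Induced norms with the same vector norm on both sides (examples above):
M = np.array([[1.0, -2.0], [3.0, 4.0]])
assert np.isclose(np.linalg.norm(M, 2),
                  np.sqrt(np.linalg.eigvalsh(M.T @ M).max()))
assert np.isclose(np.linalg.norm(M, 1), np.abs(M).sum(axis=0).max())       # max column sum
assert np.isclose(np.linalg.norm(M, np.inf), np.abs(M).sum(axis=1).max())  # max row sum

# Dual norm example 3: max x^T y s.t. ||y||_inf <= 1 is attained at y = sign(x).
x = np.array([3.0, -1.0, 2.0])
assert np.isclose(x @ np.sign(x), np.linalg.norm(x, 1))
```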
2 Positive semidefinite matrices

We denote by $S^{n \times n}$ the set of all symmetric (real) $n \times n$ matrices.

2.1 Definition

Definition 6. A matrix $A \in S^{n \times n}$ is

• positive semidefinite (psd) (notation: $A \succeq 0$) if
$$x^T A x \geq 0, \quad \forall x \in \mathbb{R}^n;$$
• positive definite (pd) (notation: $A \succ 0$) if
$$x^T A x > 0, \quad \forall x \in \mathbb{R}^n, \; x \neq 0;$$
• negative semidefinite if $-A$ is psd (notation: $A \preceq 0$);
• negative definite if $-A$ is pd (notation: $A \prec 0$).

Notation: $A \succeq 0$ means that $A$ is psd; $A \geq 0$ means that $A_{ij} \geq 0$ for all $i, j$.

Remark: Whenever we consider a quadratic form $x^T A x$, we can assume without loss of generality that the matrix $A$ is symmetric. The reason is that any matrix $A$ can be written as
$$A = \frac{A + A^T}{2} + \frac{A - A^T}{2},$$
where $B := \frac{A + A^T}{2}$ is the symmetric part of $A$ and $C := \frac{A - A^T}{2}$ is the anti-symmetric part of $A$. Notice that $x^T C x = 0$ for any $x \in \mathbb{R}^n$, so $x^T A x = x^T B x$.

Example: The matrix
$$M = \begin{pmatrix} 5 & 1 \\ 1 & -2 \end{pmatrix}$$
is indefinite. To see this, consider $x = (1, 0)^T$, which gives $x^T M x = 5 > 0$, and $x = (0, 1)^T$, which gives $x^T M x = -2 < 0$.

2.2 Eigenvalues of positive semidefinite matrices

Theorem 2. The eigenvalues of a symmetric real-valued matrix $A$ are real.

Proof: Let $x \in \mathbb{C}^n$ be a nonzero eigenvector of $A$ and let $\lambda \in \mathbb{C}$ be the corresponding eigenvalue, i.e., $A x = \lambda x$. Multiplying both sides of this equality by the conjugate transpose $x^*$ of the eigenvector $x$, we obtain
$$x^* A x = \lambda x^* x. \tag{2}$$
We now take the conjugate transpose of both sides, remembering that $A \in S^{n \times n}$ is real and symmetric:
$$x^* A^T x = \bar{\lambda} x^* x \;\Rightarrow\; x^* A x = \bar{\lambda} x^* x. \tag{3}$$
Combining (2) and (3), we get
$$\lambda x^* x = \bar{\lambda} x^* x \;\Rightarrow\; x^* x (\lambda - \bar{\lambda}) = 0 \;\Rightarrow\; \lambda = \bar{\lambda},$$
since $x \neq 0$.
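(Aside, not part of the original lecture.) The remark on symmetrizing quadratic forms and the indefinite example $M$ translate directly into code. The sketch below uses the standard fact, which this section is building toward, that a symmetric matrix is psd (resp. pd) exactly when all its eigenvalues are nonnegative (resp. positive); the helper name `definiteness` is our own.

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify the quadratic form x^T A x via the eigenvalues of the
    symmetric part of A (which are real by Theorem 2)."""
    B = (A + A.T) / 2                 # symmetric part: x^T A x = x^T B x
    eig = np.linalg.eigvalsh(B)
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semidefinite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semidefinite"
    return "indefinite"

# The anti-symmetric part contributes nothing to the quadratic form:
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = (A - A.T) / 2
x = rng.standard_normal(4)
assert abs(x @ C @ x) < 1e-10

M = np.array([[5.0, 1.0], [1.0, -2.0]])
print(definiteness(M))  # "indefinite": (1,0) gives 5 > 0, (0,1) gives -2 < 0
```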