MTL 411: Functional Analysis


Lecture A: Adjoint operators

Recall the Riesz representation theorem: if $f$ is a continuous linear functional on a Hilbert space $H$, then there exists a unique $z \in H$ such that $f(x) = \langle x, z\rangle$ for all $x \in H$. As a consequence, for a given bounded linear operator we can construct an associated bounded linear operator, called the Hilbert-adjoint operator, or simply the adjoint operator.

1 Adjoint operator

Let $H_1$ and $H_2$ be (real or complex) Hilbert spaces and $T : H_1 \to H_2$ a bounded linear operator. An adjoint operator $T^*$ of $T$ is an operator $T^* : H_2 \to H_1$ satisfying
$$\langle Tx, y\rangle = \langle x, T^*y\rangle, \quad \forall x \in H_1,\ \forall y \in H_2.$$

Remark. In the above equation, the inner product on the left-hand side is between elements of $H_2$, that is, it is the inner product of $H_2$, while the inner product on the right-hand side is that of $H_1$.

Questions. Given $T \in B(H_1, H_2)$, does an adjoint operator $T^*$ exist? If it exists, is it unique and bounded? The answers are affirmative.

Theorem 1.1. Given $T \in B(H_1, H_2)$, there exists a unique operator $T^* \in B(H_2, H_1)$ such that
$$\langle Tx, y\rangle = \langle x, T^*y\rangle, \quad \forall x \in H_1,\ \forall y \in H_2,$$
and $\|T\| = \|T^*\|$.

Proof. Fix $y \in H_2$ and define $f_y(x) = \langle Tx, y\rangle$, $x \in H_1$. Since $T$ is a bounded linear operator, using the Cauchy–Schwarz inequality (CSI) we can show that $f_y$ is a continuous linear functional on $H_1$; that is, $f_y$ is linear and
$$|f_y(x)| \le \|T\|\,\|y\|\,\|x\|, \quad \forall x \in H_1.$$
Then by the Riesz representation theorem there exists a unique $z \in H_1$ such that $f_y(x) = \langle x, z\rangle$. Therefore, for each $y \in H_2$ there exists a unique $z \in H_1$ such that $f_y(x) = \langle x, z\rangle = \langle Tx, y\rangle$. Now we define $T^*y = z$ for each $y \in H_2$. Thus
$$\langle x, T^*y\rangle = \langle Tx, y\rangle, \quad \forall x \in H_1,\ \forall y \in H_2. \tag{1.1}$$
The uniqueness of $T^*$ follows from the Riesz representation theorem, or from the simple observation that $\langle (T_1 - T_2)y, x\rangle = 0$ for all $y \in H_2$, $x \in H_1$ implies $T_1 - T_2 = 0$.

Claim: $T^*$ is linear.
Consider $y, z \in H_2$ and $\alpha \in K$ ($\mathbb{R}$ or $\mathbb{C}$). For all $x \in H_1$,
$$\langle x, T^*(\alpha y + z)\rangle = \langle Tx, \alpha y + z\rangle = \overline{\alpha}\,\langle Tx, y\rangle + \langle Tx, z\rangle = \overline{\alpha}\,\langle x, T^*y\rangle + \langle x, T^*z\rangle = \langle x, \alpha T^*y + T^*z\rangle.$$
Hence $T^*(\alpha y + z) = \alpha T^*y + T^*z$ for all $y, z \in H_2$ and $\alpha \in K$.

Claim: $T^*$ is bounded and $\|T\| = \|T^*\|$.
Since equation (1.1) holds for all elements of $H_1$ and $H_2$, choose $x = T^*y$; using CSI we get
$$\|T^*y\|^2 \le \|T\|\,\|T^*y\|\,\|y\|, \quad \forall y \in H_2 \implies \|T^*y\| \le \|T\|\,\|y\|, \quad \forall y \in H_2.$$
Therefore $T^*$ is bounded and $\|T^*\| \le \|T\|$. By a similar argument using (1.1), we can show that $\|T\| \le \|T^*\|$.

Remark. If we have an operator $S$ such that $\langle x, Sy\rangle = \langle Tx, y\rangle$ for all $x \in H_1$, $y \in H_2$, then by uniqueness $T^* = S$.

Examples.

• $A : \mathbb{R}^n \to \mathbb{R}^n$, $x \mapsto Ax$. Then $\langle Ax, y\rangle = (Ax)^T y = (x^T A^T)y = x^T(A^T y) = \langle x, A^T y\rangle$. Therefore the adjoint $A^*$ is the transpose $A^T$ of $A$.

• $B : \mathbb{C}^n \to \mathbb{C}^n$, $x \mapsto Bx$. Then $\langle Bx, y\rangle = (Bx)^T \overline{y} = (x^T B^T)\overline{y} = x^T(B^T \overline{y}) = \langle x, \overline{B^T} y\rangle$. Therefore the adjoint $B^*$ is the conjugate transpose $\overline{B^T}$ of $B$.

• The integral operator $T : L^2[a,b] \to L^2[a,b]$ defined by
$$Tf(x) = \int_a^b k(x,t)\,f(t)\,dt, \quad x \in [a,b],$$
where $k(x,t)$ is a real-valued continuous function on $[a,b] \times [a,b]$. It is easy to show that $T$ is a bounded linear operator. Indeed, using CSI in the continuous variable,
$$|Tf(x)|^2 \le \int_a^b |k(x,t)|^2\,dt \int_a^b |f(t)|^2\,dt,$$
so that
$$\left(\int_a^b |Tf(x)|^2\,dx\right)^{1/2} \le \left(\int_a^b \int_a^b |k(x,t)|^2\,dt\,dx\right)^{1/2} \left(\int_a^b |f(t)|^2\,dt\right)^{1/2}.$$
Now we calculate $T^*$:
$$\langle Tf, g\rangle = \int_a^b Tf(x)\,g(x)\,dx = \int_a^b \left(\int_a^b k(x,t)f(t)\,dt\right) g(x)\,dx = \int_a^b f(t) \left(\int_a^b k(x,t)g(x)\,dx\right) dt.$$
Therefore
$$T^*g(y) = \int_a^b k(s,y)\,g(s)\,ds, \quad y \in [a,b].$$

Exercises

1. Find the adjoint of the right shift operator
$$T(x_1, x_2, \ldots) = (0, x_1, x_2, \ldots), \quad x = (x_1, x_2, \ldots) \in \ell^2.$$
2. Find the adjoint of the multiplication operator $Mf(x) = g(x)f(x)$, $f \in L^2[a,b]$, where $g(x)$ is a bounded function on $[a,b]$.

Next we will discuss a simple lemma which will be useful on several occasions.

Lemma 1.2.
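The matrix examples above can be checked numerically. A minimal sketch (assuming NumPy), using the standard inner product $\langle u, v\rangle = \sum_i u_i \overline{v_i}$, which verifies the defining relation $\langle Bx, y\rangle = \langle x, B^*y\rangle$ and the equality $\|B\| = \|B^*\|$ for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(u, v):
    # <u, v> = sum_i u_i * conj(v_i): linear in the first slot,
    # conjugate-linear in the second (matching the notes' convention)
    return np.sum(u * np.conj(v))

n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The adjoint of B : C^n -> C^n is the conjugate transpose.
B_star = B.conj().T

# Defining relation <Bx, y> = <x, B* y>
assert np.isclose(inner(B @ x, y), inner(x, B_star @ y))

# ||B|| = ||B*|| in the operator (spectral) norm
assert np.isclose(np.linalg.norm(B, 2), np.linalg.norm(B_star, 2))
```

The same check with real data and `B.T` in place of `B.conj().T` illustrates the first example, $A^* = A^T$ on $\mathbb{R}^n$.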
(Zero operator) Let $X$ and $Y$ be inner product spaces and $T : X \to Y$ a bounded linear operator. Then
(a) $T = 0 \iff \langle Tx, y\rangle = 0$ for all $x \in X$ and $y \in Y$.
(b) If $T : X \to X$, where $X$ is complex, and $\langle Tx, x\rangle = 0$ for all $x \in X$, then $T = 0$.

Proof. (a) The proof is trivial.
(b) Consider
$$\langle T(\alpha x + y), \alpha x + y\rangle = 0. \tag{1.2}$$
Expanding, and using the hypothesis on the terms $\langle Tx, x\rangle$ and $\langle Ty, y\rangle$, we obtain
$$\overline{\alpha}\,\langle Ty, x\rangle + \alpha\,\langle Tx, y\rangle = 0, \quad \forall \alpha \in \mathbb{C},\ \forall x, y \in X. \tag{1.3}$$
Choosing $\alpha = 1$ and $\alpha = i$, we get $\langle Tx, y\rangle = 0$ for all $x, y \in X$. Then $T = 0$ by part (a).

Remark. In the above lemma, if $X$ is a real inner product space then statement (b) is not true. (Hint: use a rotation operator in $\mathbb{R}^2$.)

Properties of the adjoint operator

Theorem 1.3. Let $T : H_1 \to H_2$ be a bounded linear operator. Then
1. $(T^*)^* = T$
2. $\|TT^*\| = \|T^*T\| = \|T\|^2$
3. $N(T) = R(T^*)^\perp$
4. $N(T)^\perp = \overline{R(T^*)}$
5. $N(T^*) = R(T)^\perp$
6. $N(T^*)^\perp = \overline{R(T)}$
Here $N(T)$ and $R(T)$ denote the null space and range space of $T$.

Proof. 1. Consider $\langle (T^*)^*x, y\rangle = \langle x, T^*y\rangle = \langle Tx, y\rangle$ for all $x \in H_1$, $y \in H_2$. Then $\langle ((T^*)^* - T)x, y\rangle = 0$ for all $x \in H_1$, $y \in H_2$, so $(T^*)^* = T$ by Lemma 1.2(a).

2. Recall that the norm of a composition of bounded operators satisfies $\|SH\| \le \|S\|\,\|H\|$. Hence $\|T^*T\| \le \|T^*\|\,\|T\| = \|T\|^2$. For the other inequality, consider
$$\|T\|^2 = \left(\sup_{0 \ne x \in H_1} \frac{\|Tx\|}{\|x\|}\right)^2 = \sup_{0 \ne x \in H_1} \frac{\|Tx\|^2}{\|x\|^2} = \sup_{0 \ne x \in H_1} \frac{\langle Tx, Tx\rangle}{\|x\|^2} = \sup_{0 \ne x \in H_1} \frac{\langle T^*Tx, x\rangle}{\|x\|^2} \le \sup_{0 \ne x \in H_1} \frac{\|T^*T\|\,\|x\|\,\|x\|}{\|x\|^2} = \|T^*T\|.$$
Applying the same argument to $T^*$ in place of $T$ gives $\|TT^*\| = \|T^*\|^2 = \|T\|^2$.

3. Suppose $x \in N(T)$; then $Tx = 0$. Thus $\langle T^*y, x\rangle = \langle y, Tx\rangle = 0$ for all $y \in H_2$. Therefore $x \in R(T^*)^\perp$. Similarly, it is easy to show that $R(T^*)^\perp \subset N(T)$.

4. For $z \in R(T^*)$ there exists $y \in H_2$ such that $T^*y = z$. Then
$$\langle z, x\rangle = \langle T^*y, x\rangle = \langle y, Tx\rangle = 0 \quad \text{for all } x \in N(T).$$
Therefore $z \perp N(T)$. This implies $R(T^*) \subset N(T)^\perp$, and hence $\overline{R(T^*)} \subset N(T)^\perp$ (why?).

Claim: $\overline{R(T^*)} = N(T)^\perp$. Suppose this is not true, that is, $\overline{R(T^*)}$ is a proper closed subspace of $N(T)^\perp$. Then by the projection theorem there exists $0 \ne x_0 \in N(T)^\perp$ with $x_0 \perp$
$\overline{R(T^*)}$. In particular,
$$\langle x_0, T^*y\rangle = 0,\ \forall y \in H_2 \implies \langle Tx_0, y\rangle = 0,\ \forall y \in H_2 \implies Tx_0 = 0 \implies x_0 \in N(T) \cap N(T)^\perp.$$
Therefore $x_0 = 0$, a contradiction to $x_0 \ne 0$; hence the claim is proved.

The other parts are left as exercises.

Exercises

Let $H_1$, $H_2$ and $H_3$ be Hilbert spaces. Show that:
1. $(TS)^* = S^*T^*$ for $S \in B(H_1, H_2)$, $T \in B(H_2, H_3)$.
2. $(\alpha T + \beta S)^* = \overline{\alpha}\,T^* + \overline{\beta}\,S^*$ for $S, T \in B(H_1, H_2)$ and $\alpha, \beta \in K$.
3. $T^*T = 0 \iff T = 0$.
4. Construct a bounded linear operator on $\ell^2$ whose range is not closed. Hint: $T(x_1, x_2, \ldots) = \left(x_1, \frac{x_2}{\sqrt{2}}, \frac{x_3}{\sqrt{3}}, \ldots\right)$, $(x_n) \in \ell^2$.
5. Let $H$ be a Hilbert space and $T : H \to H$ a bijective bounded linear operator whose inverse is bounded. Show that $(T^*)^{-1}$ exists and $(T^*)^{-1} = (T^{-1})^*$.
6. Let $T_1$ and $T_2$ be bounded linear operators from a complex Hilbert space $H$ into itself. If $\langle T_1 x, x\rangle = \langle T_2 x, x\rangle$ for all $x \in H$, show that $T_1 = T_2$.
7. Let $S = I + T^*T : H \to H$, where $T$ is linear and bounded. Show that $S^{-1} : S(H) \to H$ exists.

Self-adjoint, Normal and Unitary operators

Definition 1.4. A bounded linear operator $T : H \to H$ on a Hilbert space $H$ is said to be
• self-adjoint if $T = T^*$;
• unitary if $T$ is bijective and $T^* = T^{-1}$;
• normal if $T^*T = TT^*$.

Remarks.
• $T$ is unitary iff $T^*T = TT^* = I$.
• Unitary or self-adjoint $\implies$ normal. The converse is not true.
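The three classes in Definition 1.4 can be illustrated in finite dimensions. A minimal sketch (assuming NumPy): a Hermitian matrix and a rotation matrix are both normal, while a diagonal matrix with non-real, non-unimodular entries is normal but neither self-adjoint nor unitary, showing that the converse in the remark fails.

```python
import numpy as np

def adj(M):
    # Adjoint of a complex matrix: the conjugate transpose
    return M.conj().T

def is_self_adjoint(M):
    return np.allclose(M, adj(M))

def is_unitary(M):
    return np.allclose(adj(M) @ M, np.eye(M.shape[0]))

def is_normal(M):
    return np.allclose(adj(M) @ M, M @ adj(M))

# Hermitian matrix: self-adjoint, hence normal
A = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])
assert is_self_adjoint(A) and is_normal(A)

# Rotation matrix: unitary (T* = T^{-1}), hence normal
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert is_unitary(U) and is_normal(U)

# Normal, but neither self-adjoint nor unitary:
# any diagonal matrix commutes with its adjoint
N = np.diag([2j, 3.0])
assert is_normal(N) and not is_self_adjoint(N) and not is_unitary(N)
```

The diagonal example works because $D^*D = DD^* = \operatorname{diag}(|d_1|^2, \ldots, |d_n|^2)$ for any diagonal $D$, while self-adjointness forces real entries and unitarity forces entries of modulus one.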