Hilbert Spaces


"The proof of the Hilbert basis theorem is not mathematics; it is theology." — Paul Gordan

"Wir müssen wissen, wir werden wissen." ("We must know, we will know.") — David Hilbert

We now continue with the study of a special class of Banach spaces, namely Hilbert spaces, in which the presence of a so-called "inner product" allows us to define angles between elements. In particular, we can introduce the geometric concept of orthogonality. This has far-reaching consequences.

4.1 Definition and Examples

Definition 4.1. Let $H$ be a vector space over $\mathbb{K}$. An inner product (or scalar product) on $H$ is a map $(\cdot \mid \cdot) : H \times H \to \mathbb{K}$ such that the following three properties hold.

(IP1) For all $x \in H$, one has $(x \mid x) \ge 0$, and $(x \mid x) = 0$ if and only if $x = 0$.
(IP2) For all $x, y \in H$, one has $(x \mid y) = \overline{(y \mid x)}$.
(IP3) For all $x, y, z \in H$ and $\lambda \in \mathbb{K}$, one has $(\lambda x + y \mid z) = \lambda (x \mid z) + (y \mid z)$.

The pair $(H, (\cdot \mid \cdot))$ is called an inner product space or pre-Hilbert space.

Example 4.2.
(a) On $H = \mathbb{K}^N$,
    $(x \mid y) := \sum_{k=1}^{N} x_k \overline{y_k}$
defines an inner product.
(b) On $H = \ell^2$,
    $(x \mid y) := \sum_{k=1}^{\infty} x_k \overline{y_k}$
for $x = (x_k)$ and $y = (y_k)$ defines an inner product.
(c) On $C([a,b])$,
    $(f \mid g) := \int_a^b f(t) \overline{g(t)} \, dt$
defines an inner product. If $w : [a,b] \to \mathbb{R}$ is such that there exist constants $0 < \varepsilon < M$ with $\varepsilon \le w(t) \le M$ for all $t \in [a,b]$, then
    $(f \mid g)_w := \int_a^b f(t) \overline{g(t)} w(t) \, dt$
also defines an inner product on $C([a,b])$.
(d) If $(\Omega, \Sigma, \mu)$ is a measure space, then
    $(f \mid g) := \int_\Omega f \overline{g} \, d\mu$
defines an inner product on $L^2(\Omega, \Sigma, \mu)$.

Note that for examples (b), (c) and (d), it follows from Hölder's inequality that the sum, respectively the integral, is well-defined.

Remark 4.3. Condition (IP1) is called definiteness of $(\cdot \mid \cdot)$; (IP2) is called conjugate symmetry. Note that if $\mathbb{K} = \mathbb{R}$, then (IP2) reduces to the symmetry $(x \mid y) = (y \mid x)$. (IP3) states that $(\cdot \mid \cdot)$ is linear in the first component. It follows from (IP2) and (IP3) that $(x \mid \lambda y + z) = \overline{\lambda} (x \mid y) + (x \mid z)$, i.e. $(\cdot \mid \cdot)$ is conjugate linear in the second component.
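The axioms (IP1)–(IP3) and the conjugate linearity from Remark 4.3 can be checked numerically for the standard inner product on $\mathbb{C}^N$ from Example 4.2 (a). The following sketch (not part of the notes; it uses NumPy and randomly chosen vectors of a hypothetical dimension 4) illustrates in particular why the conjugate must sit on the second argument:

```python
import numpy as np

# Standard inner product on C^N, as in Example 4.2(a):
# (x | y) = sum_k x_k * conj(y_k).  The conjugate on the *second*
# argument matches linearity in the first component (IP3).
def ip(x, y):
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x, y, z = (rng.normal(size=4) + 1j * rng.normal(size=4) for _ in range(3))
lam = 2.0 - 1.5j

# (IP1) definiteness: (x | x) is real and nonnegative, zero only at x = 0
assert ip(x, x).real > 0 and abs(ip(x, x).imag) < 1e-12
assert abs(ip(np.zeros(4), np.zeros(4))) == 0

# (IP2) conjugate symmetry: (x | y) = conj((y | x))
assert np.isclose(ip(x, y), np.conj(ip(y, x)))

# (IP3) linearity in the first component
assert np.isclose(ip(lam * x + y, z), lam * ip(x, z) + ip(y, z))

# ... and, as in Remark 4.3, conjugate linearity in the second
assert np.isclose(ip(x, lam * y + z), np.conj(lam) * ip(x, y) + ip(x, z))

print("all inner product axioms verified numerically")
```

With the conjugate on the first argument instead, (IP3) would fail for complex $\lambda$; both conventions occur in the literature, and these notes use linearity in the first slot.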
If $(H, (\cdot \mid \cdot))$ is an inner product space, we set
    $\|x\| := \sqrt{(x \mid x)}$.

Lemma 4.4 (Cauchy–Schwarz). Let $(H, (\cdot \mid \cdot))$ be an inner product space. Then

    $|(x \mid y)| \le \|x\| \cdot \|y\|$    (4.1)

for all $x, y \in H$, and equality holds if and only if $x$ and $y$ are linearly dependent.

Proof. Let $\lambda \in \mathbb{K}$. Then

    $0 \le (x + \lambda y \mid x + \lambda y) = (x \mid x) + \lambda (y \mid x) + \overline{\lambda} (x \mid y) + |\lambda|^2 (y \mid y) = \|x\|^2 + 2 \, \mathrm{Re}\big(\overline{\lambda} (x \mid y)\big) + |\lambda|^2 \|y\|^2.$

If $y = 0$, then $(x \mid y) = 0$ and the claimed inequality holds trivially. If $y \ne 0$, we may put $\lambda = -(x \mid y) \|y\|^{-2}$. Then we obtain

    $0 \le \|x\|^2 - 2 \frac{|(x \mid y)|^2}{\|y\|^2} + \frac{|(x \mid y)|^2}{\|y\|^4} \|y\|^2 = \|x\|^2 - \frac{|(x \mid y)|^2}{\|y\|^2},$

which establishes (4.1). By (IP1), equality holds if and only if $x = -\lambda y$, i.e. if and only if $x$ and $y$ are linearly dependent. □

Lemma 4.5. Let $(H, (\cdot \mid \cdot))$ be an inner product space. Then $\|\cdot\| := \sqrt{(\cdot \mid \cdot)}$ defines a norm on $H$.

Proof. If $\|x\| = 0$, then $(x \mid x) = 0$ and therefore $x = 0$ by (IP1). This proves (N1). For (N2), let $x \in H$ and $\lambda \in \mathbb{K}$ be given. Then

    $\|\lambda x\|^2 = (\lambda x \mid \lambda x) = \lambda \overline{\lambda} (x \mid x) = |\lambda|^2 \|x\|^2.$

Property (N3) holds since

    $\|x + y\|^2 = \|x\|^2 + 2 \, \mathrm{Re} (x \mid y) + \|y\|^2 \le \|x\|^2 + 2 \|x\| \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2,$

where we used the Cauchy–Schwarz inequality. □

Definition 4.6. A Hilbert space is an inner product space $(H, (\cdot \mid \cdot))$ which is complete with respect to the norm $\|\cdot\| := \sqrt{(\cdot \mid \cdot)}$.

Example 4.7. The inner product spaces from Example 4.2 (a), (b) and (d) are Hilbert spaces; see for example Theorem 3.105. The inner product space from Example 4.2 (c) is not complete and therefore not a Hilbert space. It suffices to note that $C([a,b])$ is a proper dense subspace of $L^2([a,b])$ and that the given inner product gives rise to the restriction of the usual norm of $L^2([a,b])$ to $C([a,b])$.

Lemma 4.8 (Parallelogram identity). Let $H$ be an inner product space. Then

    $\|x + y\|^2 + \|x - y\|^2 = 2 \|x\|^2 + 2 \|y\|^2$

for all $x, y \in H$.

(In fact, a converse also holds: if the parallelogram identity holds in a normed space $(X, \|\cdot\|)$, then there exists a unique inner product $(\cdot \mid \cdot)$ on $X$ such that $\|x\| = \sqrt{(x \mid x)}$.)

Proof.

    $\|x + y\|^2 + \|x - y\|^2 = \|x\|^2 + 2 \, \mathrm{Re} (x \mid y) + \|y\|^2 + \|x\|^2 - 2 \, \mathrm{Re} (x \mid y) + \|y\|^2 = 2 (\|x\|^2 + \|y\|^2).$ □

Exercise 4.9. Show that in $(\mathbb{R}^d, \|\cdot\|_p)$, $(\ell^p, \|\cdot\|_p)$ and $(L^p([0,1]), \|\cdot\|_p)$ the parallelogram identity fails if $p \ne 2$. Hence in this case there is no inner product $(\cdot \mid \cdot)_p$ such that $\|x\|_p = \sqrt{(x \mid x)_p}$.

Lemma 4.10. Let $H$ be an inner product space. Then $(\cdot \mid \cdot) : H \times H \to \mathbb{K}$ is continuous.

Proof. Let $x_n \to x$ and $y_n \to y$. Then $M := \sup_{n \in \mathbb{N}} \|x_n\| < \infty$. By the Cauchy–Schwarz inequality,

    $|(x_n \mid y_n) - (x \mid y)| \le |(x_n \mid y_n - y)| + |(x_n - x \mid y)| \le M \|y_n - y\| + \|y\| \|x_n - x\| \to 0.$ □

4.2 Orthogonal Projection

In this section we show that in a Hilbert space one can project onto any closed subspace. In other words, for any point there exists a unique best approximation in any given closed subspace. For this the geometric properties arising from the inner product are crucial.

(In fact, an inner product allows one to define angles between two vectors. Exercise: consider $H = \mathbb{R}^2$ and describe the angle between two vectors $x, y \in \mathbb{R}^2$ with $\|x\| = \|y\| = 1$ in terms of the inner product!)

Definition 4.11. Let $(H, (\cdot \mid \cdot))$ be an inner product space. We say that two vectors $x, y \in H$ are orthogonal (and write $x \perp y$) if $(x \mid y) = 0$. Given a subset $S \subset H$, the annihilator $S^\perp$ of $S$ is defined by

    $S^\perp := \{ y \in H : y \perp x \text{ for all } x \in S \}.$

If $S$ is a subspace, then $S^\perp$ is also called the orthogonal complement of $S$.

In an inner product space the following fundamental and classic geometric identity holds.

Lemma 4.12 (Pythagoras). Let $H$ be an inner product space. If $x \perp y$, then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.

Proof. $\|x + y\|^2 = \|x\|^2 + 2 \, \mathrm{Re} (x \mid y) + \|y\|^2 = \|x\|^2 + \|y\|^2.$ □

Proposition 4.13. Let $H$ be an inner product space and $S \subset H$.
(a) $S^\perp$ is a closed linear subspace of $H$.
(b) $\overline{\mathrm{span}} \, S \subset (S^\perp)^\perp$.
(c) $\overline{\mathrm{span}} \, S \cap S^\perp = \{0\}$.

Proof. (a) If $x, y \in S^\perp$ and $\lambda \in \mathbb{K}$, then, for $z \in S$, we have $(\lambda x + y \mid z) = \lambda (x \mid z) + (y \mid z) = 0$, hence $\lambda x + y \in S^\perp$. This shows that $S^\perp$ is a linear subspace. If $(x_n)$ is a sequence in $S^\perp$ which converges to $x$, then we infer from Lemma 4.10 that $(x \mid z) = \lim (x_n \mid z) = 0$ for all $z \in S$; hence $x \in S^\perp$, and $S^\perp$ is closed.
(b) By (a), $(S^\perp)^\perp$ is a closed linear subspace which contains $S$. Thus $\overline{\mathrm{span}} \, S \subset (S^\perp)^\perp$.
(c) If $x \in \overline{\mathrm{span}} \, S \cap S^\perp$, then, by (b), $x \in S^\perp \cap (S^\perp)^\perp$ and hence $x \perp x$. But this means $(x \mid x) = 0$. By (IP1) it follows that $x = 0$. □

We now come to the main result of this section.

Theorem 4.14. Let $(H, (\cdot \mid \cdot))$ be a Hilbert space and let $K \subset H$ be a closed linear subspace. Then for every $x \in H$ there exists a unique element $P_K x$ of $K$ such that

    $\|P_K x - x\| = \min \{ \|y - x\| : y \in K \}.$

(Exercise: give an example of a subspace of $\ell^2$ that is not closed!)

Proof. Let $d := \inf \{ \|y - x\| : y \in K \}$. By the definition of the infimum, there exists a sequence $(y_n)$ in $K$ with $\|y_n - x\| \to d$. Applying the parallelogram identity (Lemma 4.8) to the vectors $x - y_n$ and $x - y_m$, we obtain

    $2 (\|x - y_n\|^2 + \|x - y_m\|^2) = \|(x - y_n) + (x - y_m)\|^2 + \|(x - y_n) - (x - y_m)\|^2 = \|2x - y_n - y_m\|^2 + \|y_n - y_m\|^2 = 4 \big\| x - \tfrac{1}{2} (y_n + y_m) \big\|^2 + \|y_n - y_m\|^2.$

Since $z_{nm} := \tfrac{1}{2} (y_n + y_m) \in K$, we have $\|x - z_{nm}\|^2 \ge d^2$ and thus

    $\|y_n - y_m\|^2 \le 2 (\|x - y_n\|^2 + \|x - y_m\|^2) - 4 d^2.$

By the choice of the sequence $(y_n)$, the right-hand side converges to $0$ as $n, m \to \infty$, proving that $(y_n)$ is a Cauchy sequence. Since $H$ is complete, $(y_n)$ converges to some vector $P_K x$, and since $K$ is closed, $P_K x \in K$. We have thus proved existence.

As for uniqueness, if $\|z - x\| = \min \{ \|y - x\| : y \in K \}$, then, by the parallelogram identity,

    $2 \|x - P_K x\|^2 + 2 \|x - z\|^2 = 4 \big\| x - \tfrac{1}{2} (P_K x + z) \big\|^2 + \|z - P_K x\|^2$

and thus

    $4 d^2 = 4 \big\| x - \tfrac{1}{2} (P_K x + z) \big\|^2 + \|P_K x - z\|^2 \ge 4 d^2 + \|P_K x - z\|^2,$

proving that $\|P_K x - z\| = 0$; hence $P_K x = z$. □

(In a Hilbert space the orthogonal projection can be defined more generally for all closed and convex subsets. But then the orthogonal projection need not be a linear map, of course.)
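The finite-dimensional case of Theorem 4.14 can be checked by computation. The following sketch (not part of the notes; NumPy, with a hypothetical subspace $K = \mathrm{range}(A) \subset \mathbb{R}^6$ spanned by two random columns) computes $P_K x$ via least squares and verifies both the Pythagorean decomposition and the minimality of the distance:

```python
import numpy as np

rng = np.random.default_rng(1)

# K = column span of A, a closed subspace of the Hilbert space R^6
A = rng.normal(size=(6, 2))
x = rng.normal(size=6)

# Best approximation: P_K x minimizes ||y - x|| over y in K.
# For K = range(A) this is exactly the least-squares solution.
coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
PKx = A @ coeffs

# The residual x - P_K x is orthogonal to K (cf. Definition 4.11):
# it annihilates every spanning vector of K.
assert np.allclose(A.T @ (x - PKx), 0)

# Pythagoras (Lemma 4.12) for the decomposition x = P_K x + (x - P_K x)
d = np.linalg.norm(x - PKx)
assert np.isclose(np.linalg.norm(x) ** 2, np.linalg.norm(PKx) ** 2 + d ** 2)

# Minimality: P_K x beats every other point of K (sampled at random)
for _ in range(1000):
    y = A @ rng.normal(size=2)
    assert np.linalg.norm(x - y) >= d - 1e-12

print("projection verified: distance", round(d, 4))
```

The orthogonality of the residual is no accident: in a Hilbert space, $P_K x$ is characterized by $x - P_K x \in K^\perp$, which is how the projection is computed in practice.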