3. Hilbert Spaces

In this section we examine a special type of Banach space. We start with some algebraic preliminaries.

Definition. Let K be either R or C, and let X and Y be vector spaces over K. A map φ : X × Y → K is said to be K-sesquilinear if
• for every x ∈ X, the map φₓ : Y ∋ y ↦ φ(x, y) ∈ K is linear;
• for every y ∈ Y, the map φʸ : X ∋ x ↦ φ(x, y) ∈ K is conjugate linear, i.e. it is additive and satisfies φʸ(λx) = λ̄ φʸ(x), ∀ λ ∈ K, x ∈ X.
In the case K = R the above properties are equivalent to the fact that φ is bilinear.

Remark 3.1. Let X be a vector space over C, and let φ : X × X → C be a C-sesquilinear map. Then φ is completely determined by the map Qφ : X ∋ x ↦ φ(x, x) ∈ C. This can be seen by computing, for k ∈ {0, 1, 2, 3}, the quantity

    Qφ(x + iᵏy) = φ(x + iᵏy, x + iᵏy) = φ(x, x) + φ(x, iᵏy) + φ(iᵏy, x) + φ(iᵏy, iᵏy)
                = φ(x, x) + iᵏ φ(x, y) + i⁻ᵏ φ(y, x) + φ(y, y),

which then gives

(1)    φ(x, y) = (1/4) ∑_{k=0}^{3} i⁻ᵏ Qφ(x + iᵏy),  ∀ x, y ∈ X.

The map Qφ : X → C is called the quadratic form determined by φ. The identity (1) is referred to as the Polarization Identity. Remark that the Polarization Identity also gives

    φ(y, x) = (1/4) ∑_{k=0}^{3} iᵏ Qφ(x + iᵏy),  ∀ x, y ∈ X.

In particular, we see that the following conditions are equivalent:
(i) Qφ is real-valued, i.e. Qφ(x) ∈ R, ∀ x ∈ X;
(ii) φ is sesqui-symmetric, i.e. φ(y, x) equals the complex conjugate of φ(x, y), ∀ x, y ∈ X.

Definition. Let K be one of the fields R or C, and let X be a vector space over K. A K-sesquilinear map φ : X × X → K is said to be positive definite if
• φ is sesqui-symmetric;
• φ(x, x) ≥ 0, ∀ x ∈ X.
As observed before, in the complex case the second condition actually forces the first one.

With this terminology, we have the following useful technical result.

Proposition 3.1 (Cauchy-Bunyakowski-Schwartz Inequality). Let X be a K-vector space, and let φ : X × X → K be a positive definite K-sesquilinear map. Then

(2)    |φ(x, y)|² ≤ φ(x, x) · φ(y, y),  ∀ x, y ∈ X.

Moreover, if equality holds, then either x = y = 0, or there exists some α ∈ K with φ(x + αy, x + αy) = 0.

Proof. Fix x, y ∈ X, and choose a number λ ∈ K with |λ| = 1 such that |φ(x, y)| = λφ(x, y) = φ(x, λy). Define the map F : K → K by F(ζ) = φ(ζλy + x, ζλy + x), ∀ ζ ∈ K. A simple computation gives

    F(ζ) = ζ̄λ̄ζλ φ(y, y) + ζλ φ(x, y) + ζ̄λ̄ φ(y, x) + φ(x, x)
         = |ζ|²|λ|² φ(y, y) + ζλ φ(x, y) + ζ̄λ̄ φ(y, x) + φ(x, x)
         = |ζ|² φ(y, y) + ζ |φ(x, y)| + ζ̄ |φ(x, y)| + φ(x, x),  ∀ ζ ∈ K,

where the last step uses λφ(x, y) = |φ(x, y)|, together with sesqui-symmetry, which makes λ̄φ(y, x) the complex conjugate of λφ(x, y) = |φ(x, y)|, hence equal to it. In particular, when we restrict F to R, it becomes a quadratic function:

    F(t) = at² + bt + c,  ∀ t ∈ R,

where a = φ(y, y) ≥ 0, b = 2|φ(x, y)|, c = φ(x, x). Notice that we have F(t) ≥ 0, ∀ t ∈ R. This forces either a = b = 0, in which case the inequality (2) is trivial, or a > 0 and b² − 4ac ≤ 0, which reads

    4|φ(x, y)|² − 4φ(x, x)φ(y, y) ≤ 0,

and again (2) follows.

Let us examine now when equality holds in (2). We assume that either x ≠ 0 or y ≠ 0; in fact we can also assume that φ(x, x) > 0 and φ(y, y) > 0. Then equality in (2) is equivalent to b² − 4ac = 0, which in terms of quadratic equations says that the equation F(t) = at² + bt + c = 0 has a (unique) solution t₀. If we take α = t₀λ, then the equality F(t₀) = 0 reads precisely φ(αy + x, αy + x) = 0.

Definition. An inner product on X is a positive definite K-sesquilinear map

    X × X ∋ (ξ, η) ↦ (ξ | η) ∈ K,

which is non-degenerate, in the sense that
• if ξ ∈ X satisfies (ξ | ξ) = 0, then ξ = 0.
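The computations above are easy to sanity-check numerically. Below is a minimal sketch, assuming NumPy is available, that builds a positive definite C-sesquilinear form φ(x, y) = x̄ᵀAy with A = B*B (conjugate linear in the first variable, matching the convention above) and verifies the Polarization Identity (1) and the C-B-S inequality (2) on random vectors; the names phi, Q, B, and the test vectors are illustrative choices, not anything from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A = B* B is Hermitian positive semidefinite, so phi below is a
# positive definite sesquilinear form in the sense defined above.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B.conj().T @ B

def phi(x, y):
    # Conjugate linear in the first variable, linear in the second.
    return x.conj() @ A @ y

def Q(x):
    # The quadratic form determined by phi.
    return phi(x, x)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Polarization Identity (1): phi(x, y) = (1/4) sum_k i^{-k} Q(x + i^k y).
polar = sum((1j ** -k) * Q(x + (1j ** k) * y) for k in range(4)) / 4
assert np.isclose(polar, phi(x, y))

# Cauchy-Bunyakowski-Schwartz inequality (2), with a small numerical tolerance.
assert abs(phi(x, y)) ** 2 <= Q(x).real * Q(y).real + 1e-9
```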
Proposition 3.2. Let ( · | · ) be an inner product on the K-vector space X.
(i) If we define, for every ξ ∈ X, the quantity ‖ξ‖ = √(ξ | ξ), then the map X ∋ ξ ↦ ‖ξ‖ ∈ [0, ∞) defines a norm on X.
(ii) For every ξ, η ∈ X, one has the Cauchy-Bunyakovski-Schwartz inequality:

(3)    |(ξ | η)| ≤ ‖ξ‖ · ‖η‖.

Moreover, if equality holds, then ξ and η are proportional, in the sense that either ξ = 0, or η = 0, or ξ = λη for some λ ∈ K.

Proof. First of all, remark that by the non-degeneracy of the inner product one has the implication

(4)    ‖ξ‖ = 0 ⇒ ξ = 0.

The inequality (3) is immediate from Proposition 3.1. Moreover, if we have equality, then by Proposition 3.1 we also know that either ξ = η = 0, or (ξ + αη | ξ + αη) = 0 for some α ∈ K. The latter is equivalent to ‖ξ + αη‖² = 0, so by (4) we must have ξ = −αη.

Finally, in order to check that ‖ · ‖ is a norm, we must prove
(a) ‖λξ‖ = |λ| · ‖ξ‖, ∀ λ ∈ K, ξ ∈ X;
(b) ‖ξ + η‖ ≤ ‖ξ‖ + ‖η‖, ∀ ξ, η ∈ X.

Property (a) is trivial, by sesquilinearity:

    ‖λξ‖² = (λξ | λξ) = λ̄λ (ξ | ξ) = |λ|² · ‖ξ‖².

Finally, for ξ, η ∈ X, we have

    ‖ξ + η‖² = (ξ + η | ξ + η) = (ξ | ξ) + (η | η) + (ξ | η) + (η | ξ) = ‖ξ‖² + ‖η‖² + 2 Re(ξ | η).

We now use the C-B-S inequality (3), which allows us to continue the above estimate to get

    ‖ξ + η‖² = ‖ξ‖² + ‖η‖² + 2 Re(ξ | η) ≤ ‖ξ‖² + ‖η‖² + 2 |(ξ | η)| ≤ ‖ξ‖² + ‖η‖² + 2 ‖ξ‖ · ‖η‖ = (‖ξ‖ + ‖η‖)²,

so we immediately get ‖ξ + η‖ ≤ ‖ξ‖ + ‖η‖.

Exercise 1. Use the above notations, and assume we have two vectors ξ, η ≠ 0 such that ‖ξ + η‖ = ‖ξ‖ + ‖η‖. Prove that there exists some λ > 0 such that ξ = λη.

Lemma 3.1. Let X be a K-vector space, equipped with an inner product.
(i) [Parallelogram Law] ‖ξ + η‖² + ‖ξ − η‖² = 2‖ξ‖² + 2‖η‖², ∀ ξ, η ∈ X.
(ii) [Polarization Identities]
  (a) If K = R, then (ξ | η) = (1/4) (‖ξ + η‖² − ‖ξ − η‖²), ∀ ξ, η ∈ X.
  (b) If K = C, then (ξ | η) = (1/4) ∑_{k=0}^{3} i⁻ᵏ ‖ξ + iᵏη‖², ∀ ξ, η ∈ X.

Proof. (i). This is obvious, by the computations from the proof of Proposition 3.2, which give

    ‖ξ ± η‖² = ‖ξ‖² + ‖η‖² ± 2 Re(ξ | η).

(ii).(a). In the real case, the above identity gives ‖ξ ± η‖² = ‖ξ‖² + ‖η‖² ± 2(ξ | η), so we immediately get ‖ξ + η‖² − ‖ξ − η‖² = 4(ξ | η).
(b). This case is a particular instance of the Polarization Identity discussed in Remark 3.1.

Corollary 3.1. Let X be a K-vector space equipped with an inner product ( · | · ). Then the map X × X ∋ (ξ, η) ↦ (ξ | η) ∈ K is continuous with respect to the product topology.

Proof. Immediate from the polarization identities.

Corollary 3.2. Let X and Y be two K-vector spaces equipped with inner products ( · | · )_X and ( · | · )_Y. If T : X → Y is an isometric linear map, then

    (Tξ | Tη)_Y = (ξ | η)_X, ∀ ξ, η ∈ X.

Proof. Immediate from the polarization identities.

Exercise 2. Let X be a normed K-vector space. Assume the norm satisfies the Parallelogram Law. Prove that there exists an inner product ( · | · ) on X such that ‖ξ‖ = √(ξ | ξ), ∀ ξ ∈ X.
Hint: Define the inner product by the Polarization Identity, and then prove that it is indeed an inner product.

Proposition 3.3. Let X be a K-vector space, equipped with an inner product ( · | · )_X. Let Z be the completion of X with respect to the norm defined by the inner product. Then Z carries a unique inner product ( · | · )_Z, so that the norm on Z is the one defined by ( · | · )_Z. Moreover, this inner product extends ( · | · )_X, in the sense that

    (ξ | η)_Z = (ξ | η)_X, ∀ ξ, η ∈ X,

where we regard X as a dense subspace of Z.

Proof. It is obvious that the norm on Z satisfies the Parallelogram Law, since the law holds on the dense subspace X and the norm is continuous. We then apply Exercise 2. Uniqueness is automatic: by Lemma 3.1, the inner product is determined by the norm via polarization.
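As a quick illustration of Lemma 3.1 and of the reconstruction in Exercise 2, the following sketch (again assuming NumPy; the helper names inner and norm are ad hoc) checks that the standard inner product norm on Cⁿ satisfies the Parallelogram Law, and that the complex Polarization Identity recovers the inner product from the norm alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
eta = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Standard inner product on C^n (conjugate linear in the first variable)
# and the norm it defines.
inner = lambda a, b: a.conj() @ b
norm = lambda v: np.sqrt(inner(v, v).real)

# Parallelogram Law.
assert np.isclose(norm(xi + eta) ** 2 + norm(xi - eta) ** 2,
                  2 * norm(xi) ** 2 + 2 * norm(eta) ** 2)

# Complex Polarization Identity: the norm alone determines the inner product.
polar = sum((1j ** -k) * norm(xi + (1j ** k) * eta) ** 2 for k in range(4)) / 4
assert np.isclose(polar, inner(xi, eta))
```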
Definitions. Let K be one of the fields R or C. A Hilbert space over K is a K-vector space, equipped with an inner product, which is complete with respect to the norm defined by the inner product. Some textbooks use the term Euclidean for real Hilbert spaces, and reserve the term Hilbert only for the complex case.

Examples 3.1. For J a non-empty set, the space ℓ²_K(J) is a Hilbert space. We know that this is a Banach space. The inner product defining the norm is

    (α | β) = ∑_{j∈J} ᾱ(j) β(j), ∀ α, β ∈ ℓ²_K(J).

(The fact that the function ᾱβ : J → K belongs to ℓ¹_K(J) is discussed in Section 1, Exercise 5.) More generally, by Exercise 2, any Banach space whose norm satisfies the Parallelogram Law is a Hilbert space.
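Conversely, the Parallelogram Law does fail for familiar Banach-space norms, which is exactly why those spaces carry no compatible inner product. A small sketch (assuming NumPy; the choice of test vectors ξ = e₁, η = e₂ on K² is illustrative) showing the failure for the sup norm and the ℓ¹ norm:

```python
import numpy as np

xi, eta = np.array([1.0, 0.0]), np.array([0.0, 1.0])

for name, norm in [("sup norm", lambda v: np.abs(v).max()),
                   ("l1 norm", lambda v: np.abs(v).sum())]:
    lhs = norm(xi + eta) ** 2 + norm(xi - eta) ** 2
    rhs = 2 * norm(xi) ** 2 + 2 * norm(eta) ** 2
    print(f"{name}: lhs = {lhs}, rhs = {rhs}, law holds: {np.isclose(lhs, rhs)}")
```

Running it shows both norms violate the law (2 vs 4 for the sup norm, 8 vs 4 for the ℓ¹ norm), so neither norm can come from an inner product.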