Review of Hilbert and Banach Spaces


Definition 1 (Vector Space). A vector space over $\mathbb{C}$ is a set $\mathcal{V}$ equipped with two operations,
$$(v, w) \in \mathcal{V} \times \mathcal{V} \mapsto v + w \in \mathcal{V} \qquad\qquad (\alpha, v) \in \mathbb{C} \times \mathcal{V} \mapsto \alpha v \in \mathcal{V}$$
called addition and scalar multiplication, respectively, that obey the following axioms.

Additive axioms. There is an element $0 \in \mathcal{V}$ and, for each $x \in \mathcal{V}$, an element $-x \in \mathcal{V}$ such that, for all $x, y, z \in \mathcal{V}$,
(i) $x + y = y + x$
(ii) $(x + y) + z = x + (y + z)$
(iii) $0 + x = x + 0 = x$
(iv) $(-x) + x = x + (-x) = 0$

Multiplicative axioms. For every $x \in \mathcal{V}$ and $\alpha, \beta \in \mathbb{C}$,
(v) $0x = 0$
(vi) $1x = x$
(vii) $(\alpha\beta)x = \alpha(\beta x)$

Distributive axioms. For every $x, y \in \mathcal{V}$ and $\alpha, \beta \in \mathbb{C}$,
(viii) $\alpha(x + y) = \alpha x + \alpha y$
(ix) $(\alpha + \beta)x = \alpha x + \beta x$

Definition 2 (Subspace). A subset $\mathcal{W}$ of a vector space $\mathcal{V}$ is called a linear subspace of $\mathcal{V}$ if it is closed under addition and scalar multiplication, that is, if $x + y \in \mathcal{W}$ and $\alpha x \in \mathcal{W}$ for all $x, y \in \mathcal{W}$ and all $\alpha \in \mathbb{C}$. Then $\mathcal{W}$ is itself a vector space over $\mathbb{C}$.

Definition 3 (Inner Product).
(a) An inner product on a vector space $\mathcal{V}$ is a function $(x, y) \in \mathcal{V} \times \mathcal{V} \mapsto \langle x, y\rangle \in \mathbb{C}$ that obeys
(i) (linearity in the second argument) $\langle x, \alpha y\rangle = \alpha \langle x, y\rangle$ and $\langle x, y + z\rangle = \langle x, y\rangle + \langle x, z\rangle$
(ii) (conjugate symmetry) $\langle x, y\rangle = \overline{\langle y, x\rangle}$
(iii) (positive definiteness) $\langle x, x\rangle > 0$ if $x \neq 0$
for all $x, y, z \in \mathcal{V}$ and $\alpha \in \mathbb{C}$.
(b) Two vectors $x$ and $y$ are said to be orthogonal with respect to the inner product $\langle\,\cdot\,,\,\cdot\,\rangle$ if $\langle x, y\rangle = 0$.
(c) We'll use the terms "inner product space" or "pre-Hilbert space" to mean a vector space over $\mathbb{C}$ equipped with an inner product.

Definition 4 (Norm).
(a) A norm on a vector space $\mathcal{V}$ is a function $x \in \mathcal{V} \mapsto \|x\| \in [0, \infty)$ that obeys
(i) $\|x\| = 0$ if and only if $x = 0$
(ii) $\|\alpha x\| = |\alpha|\,\|x\|$
(iii) $\|x + y\| \le \|x\| + \|y\|$
for all $x, y \in \mathcal{V}$ and $\alpha \in \mathbb{C}$.
(b) A sequence $(v_n)_{n\in\mathbb{N}} \subset \mathcal{V}$ is said to be Cauchy with respect to the norm $\|\cdot\|$ if
$$\forall \varepsilon > 0 \ \ \exists N \in \mathbb{N} \ \text{ s.t. } \ m, n > N \implies \|v_n - v_m\| < \varepsilon$$
(c) A sequence $(v_n)_{n\in\mathbb{N}} \subset \mathcal{V}$ is said to converge to $v$ in the norm $\|\cdot\|$ if
$$\forall \varepsilon > 0 \ \ \exists N \in \mathbb{N} \ \text{ s.t. } \ n > N \implies \|v_n - v\| < \varepsilon$$
(d) A normed vector space is said to be complete if every Cauchy sequence converges.
(e) A subset $\mathcal{D}$ of a normed vector space $\mathcal{V}$ is said to be dense in $\mathcal{V}$ if $\overline{\mathcal{D}} = \mathcal{V}$, where $\overline{\mathcal{D}}$ is the closure of $\mathcal{D}$. That is, every element of $\mathcal{V}$ is a limit of a sequence of elements of $\mathcal{D}$.

Theorem 5. Let $\langle\,\cdot\,,\,\cdot\,\rangle$ be an inner product on a vector space $\mathcal{V}$ and set $\|x\| = \sqrt{\langle x, x\rangle}$ for all $x \in \mathcal{V}$.
(a) The inner product is sesquilinear. That is,
$$\langle x, \alpha y + \beta z\rangle = \alpha \langle x, y\rangle + \beta \langle x, z\rangle \qquad\qquad \langle \alpha x + \beta y, z\rangle = \bar{\alpha}\, \langle x, z\rangle + \bar{\beta}\, \langle y, z\rangle$$
for all $x, y, z \in \mathcal{V}$ and $\alpha, \beta \in \mathbb{C}$.(1)
(b) $\|x\|$ is a norm.
(c) The inner product and associated norm obey
(i) (Cauchy–Schwarz inequality) $|\langle x, y\rangle| \le \|x\|\,\|y\|$
(ii) (parallelogram law) $\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$
(iii) (polarization identities)
$$\langle x, y\rangle = \tfrac{1}{2}\big[\|x + y\|^2 - \|x\|^2 - \|y\|^2\big] + \tfrac{1}{2i}\big[\|x + iy\|^2 - \|x\|^2 - \|y\|^2\big]$$
$$\qquad\quad\; = \tfrac{1}{4}\big[\|x + y\|^2 - \|x - y\|^2\big] + \tfrac{1}{4i}\big[\|x + iy\|^2 - \|x - iy\|^2\big]$$
for all $x, y \in \mathcal{V}$.

(1) Physicists and mathematical physicists generally use the convention that inner products are linear in the second argument and conjugate linear in the first. Some mathematicians use the convention that inner products are linear in the first argument and conjugate linear in the second.

Lemma 6. Let $\|\cdot\|$ be a norm on a vector space $\mathcal{V}$. There exists an inner product $\langle\,\cdot\,,\,\cdot\,\rangle$ on $\mathcal{V}$ such that $\langle x, x\rangle = \|x\|^2$ for all $x \in \mathcal{V}$ if and only if $\|\cdot\|$ obeys the parallelogram law.
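The relations in Theorem 5(c) are easy to check numerically for the standard inner product on $\mathbb{C}^n$ (Example 10(a) below). The following minimal sketch, not part of the original notes, uses Python with NumPy; note that `np.vdot` conjugates its first argument, which matches the convention of footnote (1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def ip(u, v):
    # Standard inner product on C^n: conjugate-linear in the first argument,
    # linear in the second (np.vdot conjugates its first argument).
    return np.vdot(u, v)

def norm(v):
    # Associated norm ||v|| = sqrt(<v, v>).
    return np.sqrt(ip(v, v).real)

# Cauchy-Schwarz inequality: |<x, y>| <= ||x|| ||y||
assert abs(ip(x, y)) <= norm(x) * norm(y) + 1e-12

# Parallelogram law: ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2
assert np.isclose(norm(x + y) ** 2 + norm(x - y) ** 2,
                  2 * norm(x) ** 2 + 2 * norm(y) ** 2)

# Polarization identity (second form in Theorem 5(c)(iii)):
# <x, y> = 1/4 (||x+y||^2 - ||x-y||^2) + 1/(4i) (||x+iy||^2 - ||x-iy||^2)
pol = 0.25 * (norm(x + y) ** 2 - norm(x - y) ** 2) \
      + (1 / 4j) * (norm(x + 1j * y) ** 2 - norm(x - 1j * y) ** 2)
assert np.isclose(pol, ip(x, y))
```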
Definition 7 (Banach Space).
(a) A Banach space is a complete normed vector space.
(b) Two Banach spaces $\mathcal{B}_1$ and $\mathcal{B}_2$ are said to be isometric if there exists a map $U : \mathcal{B}_1 \to \mathcal{B}_2$ that is
(i) linear (meaning that $U(\alpha x + \beta y) = \alpha U(x) + \beta U(y)$ for all $x, y \in \mathcal{B}_1$ and $\alpha, \beta \in \mathbb{C}$),
(ii) onto (a.k.a. surjective), and
(iii) isometric (meaning that $\|Ux\|_{\mathcal{B}_2} = \|x\|_{\mathcal{B}_1}$ for all $x \in \mathcal{B}_1$). This implies that $U$ is 1–1 (a.k.a. injective).

Definition 8 (Hilbert Space).
(a) A Hilbert space $\mathcal{H}$ is a complex inner product space that is complete under the associated norm.
(b) Two Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ are said to be isomorphic (denoted $\mathcal{H}_1 \cong \mathcal{H}_2$) if there exists a map $U : \mathcal{H}_1 \to \mathcal{H}_2$ that is
(i) linear,
(ii) onto, and
(iii) inner product preserving (meaning that $\langle Ux, Uy\rangle_{\mathcal{H}_2} = \langle x, y\rangle_{\mathcal{H}_1}$ for all $x, y \in \mathcal{H}_1$).
Such a map is called unitary.

Lemma. If $\mathcal{H}_1$ and $\mathcal{H}_2$ are two Hilbert spaces and if $U : \mathcal{H}_1 \to \mathcal{H}_2$ is linear, onto and isometric, then $U$ is unitary.

Theorem 9 (Completion). If $\big(\mathcal{V}, \langle\,\cdot\,,\,\cdot\,\rangle_{\mathcal{V}}\big)$ is any inner product space, then there exist a Hilbert space $\big(\mathcal{H}, \langle\,\cdot\,,\,\cdot\,\rangle_{\mathcal{H}}\big)$ and a map $U : \mathcal{V} \to \mathcal{H}$ such that
(i) $U$ is 1–1,
(ii) $U$ is linear,
(iii) $\langle Ux, Uy\rangle_{\mathcal{H}} = \langle x, y\rangle_{\mathcal{V}}$ for all $x, y \in \mathcal{V}$, and
(iv) $U(\mathcal{V}) = \{\,Ux \mid x \in \mathcal{V}\,\}$ is dense in $\mathcal{H}$. If $\mathcal{V}$ is complete, then $U(\mathcal{V}) = \mathcal{H}$.
$\mathcal{H}$ is called the completion of $\mathcal{V}$.

Example 10
(a) $\mathbb{C}^n = \{\,x = (x_1, \cdots, x_n) \mid x_1, \cdots, x_n \in \mathbb{C}\,\}$ together with the inner product $\langle x, y\rangle = \sum_{\ell=1}^{n} \bar{x}_\ell\, y_\ell$ is a Hilbert space.
(b) If $1 \le p < \infty$, then $\ell^p = \big\{\,(x_n)_{n\in\mathbb{N}} \mid \{x_n\}_{n\in\mathbb{N}} \subset \mathbb{C},\ \sum_{n=1}^{\infty} |x_n|^p < \infty\,\big\}$ together with the norm $\big\|(x_n)_{n\in\mathbb{N}}\big\|_p = \big[\sum_{n=1}^{\infty} |x_n|^p\big]^{1/p}$ is a Banach space.
(c) $\ell^2 = \big\{\,(x_n)_{n\in\mathbb{N}} \mid \sum_{n=1}^{\infty} |x_n|^2 < \infty\,\big\}$ is a Hilbert space with the inner product $\big\langle (x_n)_{n\in\mathbb{N}}, (y_n)_{n\in\mathbb{N}}\big\rangle = \sum_{n=1}^{\infty} \bar{x}_n\, y_n$.
(d) $\ell^\infty = \big\{\,(x_n)_{n\in\mathbb{N}} \mid \sup_n |x_n| < \infty\,\big\}$ and $c_0 = \big\{\,(x_n)_{n\in\mathbb{N}} \mid \lim_{n\to\infty} x_n = 0\,\big\}$ are both Banach spaces with the norm $\big\|(x_n)_{n\in\mathbb{N}}\big\|_\infty = \sup_n |x_n|$.

Theorem. Let $1 \le p, q, r \le \infty$ and let $x = (x_n)_{n\in\mathbb{N}}$, $y = (y_n)_{n\in\mathbb{N}}$. Then
(a) (Minkowski inequality) If $x, y \in \ell^p$, then $x + y \in \ell^p$ and $\|x + y\|_p \le \|x\|_p + \|y\|_p$.
(b) (Hölder inequality) If $\tfrac{1}{p} + \tfrac{1}{q} = \tfrac{1}{r}$, $x \in \ell^p$ and $y \in \ell^q$, then $(x_n y_n)_{n\in\mathbb{N}} \in \ell^r$ and $\big\|(x_n y_n)_{n\in\mathbb{N}}\big\|_r \le \|x\|_p\, \|y\|_q$.
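These sequence-space inequalities can be illustrated on finitely supported sequences, which belong to every $\ell^p$. The sketch below, not part of the notes, uses Python with NumPy; the exponents $p = 3$, $q = 3/2$ (hence $r = 1$) and the random test sequences are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
# Finitely supported sequences (all further entries are zero), so they
# belong to every ell^p space.
x = rng.standard_normal(50)
y = rng.standard_normal(50)

def lp_norm(v, p):
    """ell^p norm of a finitely supported sequence (p = np.inf gives the sup norm)."""
    if p == np.inf:
        return np.max(np.abs(v))
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

p, q = 3.0, 1.5
r = 1.0 / (1.0 / p + 1.0 / q)   # Hölder exponent: 1/r = 1/p + 1/q, here r = 1

# Minkowski inequality: ||x + y||_p <= ||x||_p + ||y||_p
assert lp_norm(x + y, p) <= lp_norm(x, p) + lp_norm(y, p) + 1e-12

# Hölder inequality: ||(x_n y_n)||_r <= ||x||_p ||y||_q
assert lp_norm(x * y, r) <= lp_norm(x, p) * lp_norm(y, q) + 1e-12
```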
(e) Let $\mathcal{X}$ be a metric space (or, more generally, a topological space) and
$$C(\mathcal{X}) = \{\,f : \mathcal{X} \to \mathbb{C} \mid f \text{ continuous, bounded}\,\} \qquad C_0(\mathcal{X}) = \{\,f : \mathcal{X} \to \mathbb{C} \mid f \text{ continuous, compact support}\,\}$$
If $\mathcal{X}$ is a subset of $\mathbb{R}^n$ or $\mathbb{C}^n$ for some $n \in \mathbb{N}$, let
$$C_\infty(\mathcal{X}) = \big\{\,f : \mathcal{X} \to \mathbb{C} \mid f \text{ continuous},\ \lim_{|x|\to\infty} f(x) = 0\,\big\}$$
Then $C(\mathcal{X})$ and $C_\infty(\mathcal{X})$ are Banach spaces with the norm $\|f\| = \sup_{x\in\mathcal{X}} |f(x)|$. $C_0(\mathcal{X})$ is a normed vector space, but need not be complete.

(f) Let $1 \le p \le \infty$. Let $(X, \mathcal{M}, \mu)$ be a measure space, with $X$ a set, $\mathcal{M}$ a $\sigma$-algebra and $\mu$ a measure. For $p < \infty$, set
$$\mathcal{L}^p(X, \mathcal{M}, \mu) = \Big\{\,\varphi : X \to \mathbb{C} \;\Big|\; \varphi \text{ is } \mathcal{M}\text{-measurable and } \int |\varphi(x)|^p \,d\mu(x) < \infty\,\Big\} \qquad \|\varphi\|_p = \Big[\int |\varphi(x)|^p \,d\mu(x)\Big]^{1/p}$$
For $p = \infty$, set(2)
$$\mathcal{L}^\infty(X, \mathcal{M}, \mu) = \big\{\,\varphi : X \to \mathbb{C} \mid \varphi \text{ is } \mathcal{M}\text{-measurable and } \operatorname{ess\,sup} |\varphi(x)| < \infty\,\big\} \qquad \|\varphi\|_\infty = \operatorname{ess\,sup} |\varphi(x)|$$

(2) The essential supremum of $|\varphi|$, with respect to the measure $\mu$, is denoted $\operatorname{ess\,sup}_{x\in X} |\varphi(x)|$ and is defined by $\inf\{\, a \ge 0 \mid |\varphi(x)| \le a \text{ almost everywhere, with respect to } \mu \,\}$.

This is not quite a Banach space because any function $\varphi$ that is zero almost everywhere has "norm" zero. So we define an equivalence relation on $\mathcal{L}^p(X, \mathcal{M}, \mu)$ by
$$\varphi \sim \psi \iff \varphi = \psi \text{ a.e.}$$
As usual, the equivalence class of $\varphi \in \mathcal{L}^p(X, \mathcal{M}, \mu)$ is $[\varphi] = \{\,\psi \in \mathcal{L}^p(X, \mathcal{M}, \mu) \mid \psi \sim \varphi\,\}$. Then $L^p(X, \mathcal{M}, \mu) = \{\,[\varphi] \mid \varphi \in \mathcal{L}^p(X, \mathcal{M}, \mu)\,\}$ is a Banach space with
$$[\varphi] + [\psi] = [\varphi + \psi] \qquad a[\varphi] = [a\varphi] \qquad \big\|[\varphi]\big\|_p = \|\varphi\|_p$$
for all $\varphi, \psi \in \mathcal{L}^p(X, \mathcal{M}, \mu)$ and $a \in \mathbb{C}$, and $L^2(X, \mathcal{M}, \mu)$ is a Hilbert space with inner product
$$\big\langle [\varphi], [\psi]\big\rangle = \int \overline{\varphi(x)}\, \psi(x)\, d\mu(x)$$
for all $\varphi, \psi \in \mathcal{L}^2(X, \mathcal{M}, \mu)$. It is standard to write $\varphi$ in place of $[\varphi]$.

If $X$ is a Lebesgue measurable subset of $\mathbb{R}$, then $L^p(X)$ denotes $L^p(X, \mathcal{M}, \mu)$ with $\mathcal{M}$ the Lebesgue measurable subsets of $X$ and $\mu$ Lebesgue measure.

Theorem. Let $1 \le p, q, r \le \infty$.
(a) (Minkowski inequality) If $f, g \in L^p(X, \mathcal{M}, \mu)$, then $f + g \in L^p(X, \mathcal{M}, \mu)$ and $\|f + g\|_{L^p} \le \|f\|_{L^p} + \|g\|_{L^p}$.
(b) (Hölder inequality) If $\tfrac{1}{p} + \tfrac{1}{q} = \tfrac{1}{r}$, $f \in L^p(X, \mathcal{M}, \mu)$ and $g \in L^q(X, \mathcal{M}, \mu)$, then $fg \in L^r(X, \mathcal{M}, \mu)$ and $\|fg\|_{L^r} \le \|f\|_{L^p}\, \|g\|_{L^q}$.

(g) Let $D$ be an open subset of $\mathbb{C}$.
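As a rough numerical illustration of the $L^p$ versions of the Minkowski and Hölder inequalities stated above, the sketch below (not part of the notes) approximates the Lebesgue integrals on $X = [0, 1]$ by Riemann sums on a uniform grid; the test functions, the grid size, and the exponents $p = 4$, $q = 4/3$ (so $r = 1$) are arbitrary choices.

```python
import numpy as np

# Uniform grid on [0, 1]; each integral below is approximated by a Riemann sum.
t = np.linspace(0.0, 1.0, 10_001)
dt = t[1] - t[0]

f = np.sin(3 * np.pi * t)            # a bounded function on [0, 1]
g = np.exp(-t) * np.cos(7 * t)       # another bounded function on [0, 1]

def Lp_norm(phi, p):
    """Riemann-sum approximation of the L^p([0,1]) norm of phi."""
    return (np.sum(np.abs(phi) ** p) * dt) ** (1.0 / p)

p, q = 4.0, 4.0 / 3.0
r = 1.0                              # Hölder exponent: 1/r = 1/p + 1/q

# Minkowski inequality: ||f + g||_{L^p} <= ||f||_{L^p} + ||g||_{L^p}
assert Lp_norm(f + g, p) <= Lp_norm(f, p) + Lp_norm(g, p) + 1e-10

# Hölder inequality: ||f g||_{L^r} <= ||f||_{L^p} ||g||_{L^q}
assert Lp_norm(f * g, r) <= Lp_norm(f, p) * Lp_norm(g, q) + 1e-10
```

Both checks hold exactly for the discretized sums, since they are just the sequence-space inequalities applied to the grid values weighted by $dt$.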