6 Hilbert Spaces


Robert Oeckl, FA NOTES 6, 01/06/2010

6.1 The Fréchet-Riesz Representation Theorem

Definition 6.1. Let $X$ be an inner product space. A pair of vectors $x, y \in X$ is called orthogonal iff $\langle x, y \rangle = 0$. We write $x \perp y$. A pair of subsets $A, B \subseteq X$ is called orthogonal iff $x \perp y$ for all $x \in A$ and $y \in B$. Moreover, if $A \subseteq X$ is some subset we define its orthogonal complement to be
$$A^\perp := \{ y \in X : x \perp y \ \forall x \in A \}.$$

Exercise 30. Let $X$ be an inner product space.
1. Let $x, y \in X$. If $x \perp y$ then $\|x\|^2 + \|y\|^2 = \|x + y\|^2$.
2. Let $A \subseteq X$ be a subset. Then $A^\perp$ is a closed subspace of $X$.
3. $A \subseteq (A^\perp)^\perp$.
4. $A^\perp = (\operatorname{span} A)^\perp$.
5. $A \cap A^\perp \subseteq \{0\}$.

Proposition 6.2. Let $H$ be a Hilbert space, $F \subseteq H$ a closed and convex subset and $x \in H$. Then, there exists a unique element $\tilde{x} \in F$ such that
$$\|\tilde{x} - x\| = \inf_{y \in F} \|y - x\|.$$

Proof. Define $a := \inf_{y \in F} \|y - x\|$. Let $\{y_n\}_{n \in \mathbb{N}}$ be a sequence in $F$ such that $\lim_{n \to \infty} \|y_n - x\| = a$. Let $\epsilon > 0$ and choose $n_0 \in \mathbb{N}$ such that $\|y_n - x\|^2 \le a^2 + \epsilon$ for all $n \ge n_0$. Now let $n, m \ge n_0$. Then, using the parallelogram equality of Theorem 2.39 we find
$$\|y_n - y_m\|^2 = 2\|y_n - x\|^2 + 2\|y_m - x\|^2 - \|y_n + y_m - 2x\|^2 = 2\|y_n - x\|^2 + 2\|y_m - x\|^2 - 4\left\| \tfrac{y_n + y_m}{2} - x \right\|^2 \le 2(a^2 + \epsilon) + 2(a^2 + \epsilon) - 4a^2 = 4\epsilon.$$
(Here $\tfrac{y_n + y_m}{2} \in F$ by convexity, so its distance to $x$ is at least $a$.) This shows that $\{y_n\}_{n \in \mathbb{N}}$ is a Cauchy sequence which must converge to some vector $\tilde{x} \in F$ with the desired properties since $F$ is complete. It remains to show that $\tilde{x}$ is unique. Suppose $\tilde{x}, \tilde{x}' \in F$ both satisfy the condition. Then, by a similar use of the parallelogram equality as above,
$$\|\tilde{x} - \tilde{x}'\|^2 = 2\|\tilde{x} - x\|^2 + 2\|\tilde{x}' - x\|^2 - 4\left\| \tfrac{\tilde{x} + \tilde{x}'}{2} - x \right\|^2 \le 2a^2 + 2a^2 - 4a^2 = 0.$$
That is, $\tilde{x}' = \tilde{x}$, completing the proof.

Lemma 6.3. Let $H$ be a Hilbert space, $F \subseteq H$ a closed and convex subset, $x \in H$ and $\tilde{x} \in F$. Then, the following are equivalent:
1. $\|\tilde{x} - x\| = \inf_{y \in F} \|y - x\|$
2. $\Re\langle \tilde{x} - y, \tilde{x} - x \rangle \le 0 \ \forall y \in F$

Proof. Suppose 2. holds. Then, for any $y \in F$ we have
$$\|y - x\|^2 = \|(y - \tilde{x}) + (\tilde{x} - x)\|^2 = \|y - \tilde{x}\|^2 + 2\Re\langle y - \tilde{x}, \tilde{x} - x \rangle + \|\tilde{x} - x\|^2 \ge \|\tilde{x} - x\|^2.$$
Conversely, suppose 1. holds. Fix $y \in F$ and consider the continuous map $[0,1] \to F$ given by $t \mapsto y_t := (1-t)\tilde{x} + t y$. Then,
$$\|\tilde{x} - x\|^2 \le \|y_t - x\|^2 = \|t(y - \tilde{x}) + (\tilde{x} - x)\|^2 = t^2 \|y - \tilde{x}\|^2 + 2t\Re\langle y - \tilde{x}, \tilde{x} - x \rangle + \|\tilde{x} - x\|^2.$$
Subtracting $\|\tilde{x} - x\|^2$ and dividing for $t \in (0,1]$ by $t$ leads to
$$\tfrac{1}{2} t \|y - \tilde{x}\|^2 \ge \Re\langle \tilde{x} - y, \tilde{x} - x \rangle.$$
Taking the limit $t \to 0$, this implies 2.

Lemma 6.4. Let $H$ be a Hilbert space, $F \subseteq H$ a closed subspace, $x \in H$ and $\tilde{x} \in F$. Then, the following are equivalent:
1. $\|\tilde{x} - x\| = \inf_{y \in F} \|y - x\|$
2. $\langle y, \tilde{x} - x \rangle = 0 \ \forall y \in F$

Proof. Exercise.

Proposition 6.5. Let $H$ be a Hilbert space, $F \subseteq H$ a closed proper subspace. Then, $F^\perp \neq \{0\}$.

Proof. Since $F$ is proper, there exists $x \in H \setminus F$. By Proposition 6.2 there exists an element $\tilde{x} \in F$ such that $\|\tilde{x} - x\| = \inf_{y \in F} \|y - x\|$. By Lemma 6.4, $\langle y, \tilde{x} - x \rangle = 0$ for all $y \in F$. That is, $\tilde{x} - x \in F^\perp$, and $\tilde{x} - x \neq 0$ since $x \notin F$.
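The following is a small numerical illustration, not part of the original notes, of Proposition 6.2 and Lemma 6.3 in the Hilbert space $H = \mathbb{R}^n$: for the closed convex set $F$ of vectors with nonnegative entries, the closest point to $x$ is the componentwise positive part of $x$, and the variational inequality of Lemma 6.3 can be checked against sample points of $F$. This is a sketch assuming NumPy; the names used here (e.g. `x_tilde`) are ours.

```python
import numpy as np

# Illustration of Proposition 6.2 / Lemma 6.3 in H = R^n with
# F = {y : y >= 0}, a closed convex set. The closest point of F to x
# is the componentwise positive part of x.
rng = np.random.default_rng(0)
n = 5
x = rng.normal(size=n)

x_tilde = np.maximum(x, 0.0)          # candidate minimizer in F

# Check the variational inequality Re<x_tilde - y, x_tilde - x> <= 0
# and the minimizing property against random points y in F.
for _ in range(1000):
    y = np.abs(rng.normal(size=n))    # an arbitrary element of F
    assert np.dot(x_tilde - y, x_tilde - x) <= 1e-12
    assert np.linalg.norm(x_tilde - x) <= np.linalg.norm(y - x) + 1e-12

print("closest point in the orthant:", x_tilde)
```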
Theorem 6.6 (Fréchet-Riesz Representation Theorem). Let $H$ be a Hilbert space. Then, the map $\Phi : H \to H^*$ given by $(\Phi(x))(y) := \langle y, x \rangle$ for all $x, y \in H$ is anti-linear, bijective and isometric.

Proof. The anti-linearity of $\Phi$ follows from the properties of the scalar product. Observe that for all $x \in H$,
$$\|\Phi(x)\| = \sup_{\|y\|=1} |\langle y, x \rangle| \le \|x\|$$
because of the Schwarz inequality (Theorem 2.35). On the other hand, $(\Phi(x))(x/\|x\|) = \|x\|$ for all $x \in H \setminus \{0\}$. Hence, $\|\Phi(x)\| = \|x\|$ for all $x \in H$, i.e., $\Phi$ is isometric. It remains to show that $\Phi$ is surjective. Let $f \in H^* \setminus \{0\}$. Then $\ker f$ is a closed proper subspace of $H$ and by Proposition 6.5 there exists a vector $v \in (\ker f)^\perp \setminus \{0\}$. Observe that for all $x \in H$,
$$x - \frac{f(x)}{f(v)}\, v \in \ker f.$$
Hence,
$$\langle x, v \rangle = \left\langle x - \frac{f(x)}{f(v)}\, v + \frac{f(x)}{f(v)}\, v, \; v \right\rangle = \frac{f(x)}{f(v)} \langle v, v \rangle.$$
In particular, setting $w := \overline{f(v)}\, v / \|v\|^2$ we see that $\Phi(w) = f$.

Corollary 6.7. Let $H$ be a Hilbert space. Then, $H^*$ is also a Hilbert space. Moreover, $H$ is reflexive, i.e., $H^{**}$ is naturally isomorphic to $H$.

Proof. By Theorem 6.6 the spaces $H$ and $H^*$ are isometric. This implies in particular that $H^*$ is complete and that its norm satisfies the parallelogram equality, i.e., that it is a Hilbert space. Indeed, it is easily verified that the inner product is given by
$$\langle \Phi(x), \Phi(y) \rangle_{H^*} = \langle y, x \rangle_H \quad \forall x, y \in H.$$
Consider the canonical linear map $i_H : H \to H^{**}$. It is easily verified that $i_H = \Psi \circ \Phi$, where $\Psi : H^* \to H^{**}$ is the corresponding map of Theorem 6.6. Thus, $i_H$ is a linear bijective isometry, i.e., an isomorphism of Hilbert spaces.
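As a sanity check of the construction in the proof of Theorem 6.6, here is a finite-dimensional sketch, not part of the notes, carried out in $\mathbb{C}^n$ with NumPy, using the convention of these notes that the inner product is linear in the first argument and conjugate-linear in the second: pick $x_0 \notin \ker f$, subtract its best approximation in $\ker f$ to obtain $v \in (\ker f)^\perp$ as in Proposition 6.5, and form $w = \overline{f(v)}\, v / \|v\|^2$. The helper names (`ip`, `f`, etc.) are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Inner product, linear in the first argument, conjugate-linear in the second.
def ip(a, b):
    return np.dot(a, np.conj(b))

# An arbitrary bounded linear functional f on C^n.
c = rng.normal(size=n) + 1j * rng.normal(size=n)
def f(x):
    return np.dot(c, x)

# An orthonormal basis of ker f from the SVD of the 1 x n matrix c.
_, _, Vh = np.linalg.svd(c.reshape(1, n))
K = Vh[1:].conj().T                 # columns span ker f

# As in Proposition 6.5: project some x0 onto ker f and subtract.
x0 = rng.normal(size=n) + 1j * rng.normal(size=n)
v = x0 - K @ (K.conj().T @ x0)      # v in (ker f)^perp
assert abs(f(v)) > 1e-12            # v is not in ker f

# Representing vector from the proof of Theorem 6.6.
w = np.conj(f(v)) * v / ip(v, v)

# Check Phi(w) = f, i.e. <x, w> = f(x) for random x.
for _ in range(100):
    x = rng.normal(size=n) + 1j * rng.normal(size=n)
    assert abs(ip(x, w) - f(x)) < 1e-9
```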
6.2 Orthogonal Projectors

Theorem 6.8. Let $H$ be a Hilbert space and $F \subseteq H$ a closed subspace such that $F \neq \{0\}$. Then, there exists a unique operator $P_F \in \mathrm{CL}(H, H)$ with the following properties:
1. $P_F|_F = 1_F$.
2. $\ker P_F = F^\perp$.
Moreover, $P_F$ also has the following properties:
3. $P_F(H) = F$.
4. $P_F \circ P_F = P_F$.
5. $\|P_F\| = 1$.
6. Given $x \in H$, $P_F(x)$ is the unique element of $F$ such that $\|P_F(x) - x\| = \inf_{y \in F} \|y - x\|$.
7. Given $x \in H$, $P_F(x)$ is the unique element of $F$ such that $x - P_F(x) \in F^\perp$.

Proof. We define $P_F$ to be the map $x \mapsto \tilde{x}$ given by Proposition 6.2. Then, clearly $P_F(H) = F$ and $P_F(x) = x$ if $x \in F$ and thus $P_F \circ P_F = P_F$. By Lemma 6.4 we have $P_F(x) - x \in F^\perp$ for all $x \in H$. Since $F^\perp$ is a subspace we have
$$(\lambda_1 P_F(x_1) - \lambda_1 x_1) + (\lambda_2 P_F(x_2) - \lambda_2 x_2) \in F^\perp$$
for $x_1, x_2 \in H$ and $\lambda_1, \lambda_2 \in \mathbb{K}$ arbitrary. Rewriting this we get
$$(\lambda_1 P_F(x_1) + \lambda_2 P_F(x_2)) - (\lambda_1 x_1 + \lambda_2 x_2) \in F^\perp.$$
But Lemma 6.4 also implies that if given $x \in H$ we have $z - x \in F^\perp$ for some $z \in F$, then $z = P_F(x)$. Thus,
$$\lambda_1 P_F(x_1) + \lambda_2 P_F(x_2) = P_F(\lambda_1 x_1 + \lambda_2 x_2).$$
That is, $P_F$ is linear. Using again that $x - P_F(x) \in F^\perp$ we have $x - P_F(x) \perp P_F(x)$ and hence the Pythagoras equality (Exercise 30.1)
$$\|x - P_F(x)\|^2 + \|P_F(x)\|^2 = \|x\|^2 \quad \forall x \in H.$$
This implies $\|P_F(x)\| \le \|x\|$ for all $x \in H$. In particular, $P_F$ is continuous. On the other hand $\|P_F(x)\| = \|x\|$ if $x \in F$. Therefore, $\|P_F\| = 1$. Now suppose $x \in \ker P_F$. Then, $\langle y, x \rangle = -\langle y, P_F(x) - x \rangle = 0$ for all $y \in F$ and hence $x \in F^\perp$. That is, $\ker P_F \subseteq F^\perp$. Conversely, suppose now $x \in F^\perp$. Then, $\langle y, P_F(x) \rangle = \langle y, P_F(x) - x \rangle = 0$ for all $y \in F$. Thus, $P_F(x) \in F^\perp$. But we know already that $P_F(x) \in F$. Since $F \cap F^\perp = \{0\}$ we get $P_F(x) = 0$, i.e., $x \in \ker P_F$. Thus, $F^\perp \subseteq \ker P_F$ and hence $\ker P_F = F^\perp$. This concludes the proof of the existence of $P_F$ with properties 1, 2, 3, 4, 5, 6 and 7.

Suppose now there is another operator $Q_F \in \mathrm{CL}(H, H)$ which also has the properties 1 and 2. We proceed to show that $Q_F = P_F$. To this end, consider the operator $1 - P_F \in \mathrm{CL}(H, H)$. We then have $(1 - P_F)(x) = x - P_F(x) \in F^\perp$ for all $x \in H$. Thus, $(1 - P_F)(H) \subseteq F^\perp$. Since $P_F + (1 - P_F) = 1$ and $P_F(H) = F$ we must have $H = F + F^\perp$. Thus, given $x \in H$ there are $x_1 \in F$ and $x_2 \in F^\perp$ such that $x = x_1 + x_2$. By property 1 we have $P_F(x_1) = Q_F(x_1)$ and by property 2 we have $P_F(x_2) = Q_F(x_2)$. Hence, $P_F(x) = Q_F(x)$.

Definition 6.9. Given a Hilbert space $H$ and a closed subspace $F$, the operator $P_F \in \mathrm{CL}(H, H)$ constructed in Theorem 6.8 is called the orthogonal projector onto the subspace $F$.

Corollary 6.10. Let $H$ be a Hilbert space and $F$ a closed subspace. Let $P_F$ be the associated orthogonal projector. Then $1 - P_F$ is the orthogonal projector onto $F^\perp$. That is, $P_{F^\perp} = 1 - P_F$.

Proof. Let $x \in F^\perp$. Then, $(1 - P_F)(x) = x$ since $\ker P_F = F^\perp$ by Theorem 6.8.2. That is, $(1 - P_F)|_{F^\perp} = 1_{F^\perp}$.
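A finite-dimensional sketch of Theorem 6.8 and Corollary 6.10, not part of the notes and assuming NumPy: for a subspace $F \subseteq \mathbb{C}^n$ spanned by the columns of a matrix, the orthogonal projector is $P_F = Q Q^H$ for any orthonormal basis $Q$ of $F$, and $1 - P_F$ coincides with the projector onto $F^\perp$ built independently from an orthonormal basis of $F^\perp$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2

# F = span of the columns of A, a k-dimensional subspace of C^n.
A = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
Q, _ = np.linalg.qr(A)             # orthonormal basis of F (n x k)
P = Q @ Q.conj().T                 # orthogonal projector onto F

x = rng.normal(size=n) + 1j * rng.normal(size=n)

assert np.allclose(P @ P, P)                                  # P o P = P
assert np.allclose(P @ A[:, 0], A[:, 0])                      # P|_F = 1_F
assert np.allclose(Q.conj().T @ (x - P @ x), 0)               # x - P(x) in F^perp
assert np.linalg.norm(P @ x) <= np.linalg.norm(x) + 1e-12     # ||P|| = 1

# Corollary 6.10: 1 - P_F is the orthogonal projector onto F^perp.
# Build P_{F^perp} from an orthonormal basis of F^perp (the remaining
# columns of a full QR factorization) and compare.
Qfull, _ = np.linalg.qr(A, mode="complete")   # n x n unitary
Qperp = Qfull[:, k:]                          # orthonormal basis of F^perp
P_perp = Qperp @ Qperp.conj().T
assert np.allclose(P_perp, np.eye(n) - P)
```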