Chapter 6 Euclidean Spaces


6.1 Inner Products, Euclidean Spaces

The framework of vector spaces allows us to deal with ratios of vectors and linear combinations, but there is no way to express the notion of the length of a line segment or to talk about orthogonality of vectors. A Euclidean structure will allow us to deal with metric notions such as orthogonality and length (or distance).

First, we define a Euclidean structure on a vector space.

Definition 6.1. A real vector space $E$ is a Euclidean space iff it is equipped with a symmetric bilinear form $\varphi\colon E \times E \to \mathbb{R}$ which is also positive definite, which means that
$$\varphi(u,u) > 0, \quad \text{for every } u \neq 0.$$
More explicitly, $\varphi\colon E \times E \to \mathbb{R}$ satisfies the following axioms:
$$\varphi(u_1 + u_2, v) = \varphi(u_1, v) + \varphi(u_2, v),$$
$$\varphi(u, v_1 + v_2) = \varphi(u, v_1) + \varphi(u, v_2),$$
$$\varphi(\lambda u, v) = \lambda\varphi(u, v),$$
$$\varphi(u, \lambda v) = \lambda\varphi(u, v),$$
$$\varphi(u, v) = \varphi(v, u),$$
$$u \neq 0 \text{ implies that } \varphi(u, u) > 0.$$
The real number $\varphi(u,v)$ is also called the inner product (or scalar product) of $u$ and $v$.

We also define the quadratic form associated with $\varphi$ as the function $\Phi\colon E \to \mathbb{R}_+$ such that
$$\Phi(u) = \varphi(u,u), \quad \text{for all } u \in E.$$
Since $\varphi$ is bilinear, we have $\varphi(0,0) = 0$, and since it is positive definite, we have the stronger fact that
$$\varphi(u,u) = 0 \quad \text{iff} \quad u = 0,$$
that is, $\Phi(u) = 0$ iff $u = 0$.

Given an inner product $\varphi\colon E \times E \to \mathbb{R}$ on a vector space $E$, we also denote $\varphi(u,v)$ by
$$u \cdot v, \quad \langle u, v\rangle, \quad \text{or} \quad (u \mid v),$$
and $\sqrt{\Phi(u)}$ by $\|u\|$.

Example 1. The standard example of a Euclidean space is $\mathbb{R}^n$, under the inner product $\cdot$ defined such that
$$(x_1,\dots,x_n) \cdot (y_1,\dots,y_n) = x_1y_1 + x_2y_2 + \cdots + x_ny_n.$$
This Euclidean space is denoted by $\mathbb{E}^n$.

Example 2. Let $E$ be a vector space of dimension 2, and let $(e_1, e_2)$ be a basis of $E$. If $a > 0$ and $b^2 - ac < 0$, the bilinear form defined such that
$$\varphi(x_1e_1 + y_1e_2,\; x_2e_1 + y_2e_2) = ax_1x_2 + b(x_1y_2 + x_2y_1) + cy_1y_2$$
yields a Euclidean structure on $E$. In this case,
$$\Phi(xe_1 + ye_2) = ax^2 + 2bxy + cy^2.$$

Example 3.
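The conditions $a > 0$ and $b^2 - ac < 0$ in Example 2 are exactly what makes the symmetric matrix $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ positive definite. The following is a minimal numerical sketch (not part of the notes; the coefficient values are illustrative) checking symmetry, bilinearity, and positive definiteness for one such form:

```python
import numpy as np

# Illustrative coefficients satisfying a > 0 and b^2 - a*c < 0, as in Example 2.
a, b, c = 2.0, 1.0, 3.0
assert a > 0 and b * b - a * c < 0

# In the basis (e1, e2), the bilinear form is represented by the
# symmetric matrix G = [[a, b], [b, c]], so phi(u, v) = u^T G v.
G = np.array([[a, b], [b, c]])

def phi(u, v):
    return u @ G @ v

u = np.array([1.0, -2.0])
v = np.array([0.5, 4.0])

# Symmetry and linearity in the first argument.
assert np.isclose(phi(u, v), phi(v, u))
assert np.isclose(phi(3.0 * u, v), 3.0 * phi(u, v))

# Positive definiteness: Phi(u) = phi(u, u) > 0 for u != 0.
assert phi(u, u) > 0
```

The same check with $b^2 - ac \geq 0$ would fail for some vectors, since the form would no longer be positive definite.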
Let $\mathcal{C}[a,b]$ denote the set of continuous functions $f\colon [a,b] \to \mathbb{R}$. It is easily checked that $\mathcal{C}[a,b]$ is a vector space of infinite dimension.

Given any two functions $f, g \in \mathcal{C}[a,b]$, let
$$\langle f, g\rangle = \int_a^b f(t)g(t)\,dt.$$
We leave as an easy exercise that $\langle -, -\rangle$ is indeed an inner product on $\mathcal{C}[a,b]$.

When $[a,b] = [-\pi,\pi]$ (or $[a,b] = [0, 2\pi]$; this makes basically no difference), one should compute
$$\langle \sin px, \sin qx\rangle, \quad \langle \sin px, \cos qx\rangle, \quad \text{and} \quad \langle \cos px, \cos qx\rangle,$$
for all natural numbers $p, q \geq 1$. The outcome of these calculations is what makes Fourier analysis possible!

Example 4. Let $E = \mathrm{M}_n(\mathbb{R})$ be the vector space of real $n \times n$ matrices. If we view a matrix $A \in \mathrm{M}_n(\mathbb{R})$ as a "long" column vector obtained by concatenating together its columns, we can define the inner product of two matrices $A, B \in \mathrm{M}_n(\mathbb{R})$ as
$$\langle A, B\rangle = \sum_{i,j=1}^{n} a_{ij}b_{ij},$$
which can be conveniently written as
$$\langle A, B\rangle = \mathrm{tr}(A^\top B) = \mathrm{tr}(B^\top A).$$
Since this can be viewed as the Euclidean product on $\mathbb{R}^{n^2}$, it is an inner product on $\mathrm{M}_n(\mathbb{R})$. The corresponding norm
$$\|A\|_F = \sqrt{\mathrm{tr}(A^\top A)}$$
is the Frobenius norm (see Section 4.2).

Let us observe that $\varphi$ can be recovered from $\Phi$. Indeed, by bilinearity and symmetry, we have
$$\begin{aligned}
\Phi(u+v) &= \varphi(u+v, u+v)\\
&= \varphi(u, u+v) + \varphi(v, u+v)\\
&= \varphi(u,u) + 2\varphi(u,v) + \varphi(v,v)\\
&= \Phi(u) + 2\varphi(u,v) + \Phi(v).
\end{aligned}$$
Thus, we have
$$\varphi(u,v) = \frac{1}{2}\left[\Phi(u+v) - \Phi(u) - \Phi(v)\right].$$
We also say that $\varphi$ is the polar form of $\Phi$.

One of the very important properties of an inner product $\varphi$ is that the map $u \mapsto \sqrt{\Phi(u)}$ is a norm.

Proposition 6.1. Let $E$ be a Euclidean space with inner product $\varphi$ and quadratic form $\Phi$. For all $u, v \in E$, we have the Cauchy-Schwarz inequality:
$$\varphi(u,v)^2 \leq \Phi(u)\Phi(v),$$
with equality iff $u$ and $v$ are linearly dependent.

We also have the Minkowski inequality:
$$\sqrt{\Phi(u+v)} \leq \sqrt{\Phi(u)} + \sqrt{\Phi(v)},$$
with equality iff $u$ and $v$ are linearly dependent, where in addition, if $u \neq 0$ and $v \neq 0$, then $u = \lambda v$ for some $\lambda > 0$.
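Both the trace formula of Example 4 and the polar form can be checked numerically. The sketch below (illustrative, using random data; not part of the notes) verifies $\langle A, B\rangle = \mathrm{tr}(A^\top B) = \mathrm{tr}(B^\top A)$, the Frobenius norm identity, and the polarization formula for the standard inner product on $\mathbb{R}^n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 4's inner product on n x n matrices: <A, B> = sum a_ij b_ij = tr(A^T B).
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
assert np.isclose(np.sum(A * B), np.trace(A.T @ B))
assert np.isclose(np.trace(A.T @ B), np.trace(B.T @ A))

# The Frobenius norm is the norm induced by this inner product.
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))

# Polarization: phi(u, v) = (Phi(u+v) - Phi(u) - Phi(v)) / 2,
# checked here for the standard inner product on R^n.
u = rng.standard_normal(n)
v = rng.standard_normal(n)
Phi = lambda w: w @ w
assert np.isclose(u @ v, (Phi(u + v) - Phi(u) - Phi(v)) / 2)
```

The polarization identity is why an inner product is completely determined by the lengths it assigns: two inner products inducing the same norm must be equal.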
Sketch of proof. Define the function $T\colon \mathbb{R} \to \mathbb{R}$ such that
$$T(\lambda) = \Phi(u + \lambda v),$$
for all $\lambda \in \mathbb{R}$. Using bilinearity and symmetry, we can show that
$$\Phi(u + \lambda v) = \Phi(u) + 2\lambda\varphi(u,v) + \lambda^2\Phi(v).$$
Since $\varphi$ is positive definite, we have $T(\lambda) \geq 0$ for all $\lambda \in \mathbb{R}$.

If $\Phi(v) = 0$, then $v = 0$, and we also have $\varphi(u,v) = 0$. In this case, the Cauchy-Schwarz inequality is trivial.

If $\Phi(v) > 0$, then
$$\lambda^2\Phi(v) + 2\lambda\varphi(u,v) + \Phi(u) = 0$$
can't have distinct real roots, which means that its discriminant
$$\Delta = 4\left(\varphi(u,v)^2 - \Phi(u)\Phi(v)\right)$$
is zero or negative, which is precisely the Cauchy-Schwarz inequality. The Minkowski inequality can then be shown.

The Minkowski inequality
$$\sqrt{\Phi(u+v)} \leq \sqrt{\Phi(u)} + \sqrt{\Phi(v)}$$
shows that the map $u \mapsto \sqrt{\Phi(u)}$ satisfies the triangle inequality, condition (N3) of Definition 4.1, and since $\varphi$ is bilinear and positive definite, it also satisfies conditions (N1) and (N2) of Definition 4.1; thus, it is a norm on $E$. The norm induced by $\varphi$ is called the Euclidean norm induced by $\varphi$.

Note that the Cauchy-Schwarz inequality can be written as
$$|u \cdot v| \leq \|u\|\,\|v\|,$$
and the Minkowski inequality as
$$\|u + v\| \leq \|u\| + \|v\|.$$
We now define orthogonality.

6.2 Orthogonality, Duality, Adjoint Maps

Definition 6.2. Given a Euclidean space $E$, any two vectors $u, v \in E$ are orthogonal, or perpendicular, iff $u \cdot v = 0$. Given a family $(u_i)_{i \in I}$ of vectors in $E$, we say that $(u_i)_{i \in I}$ is orthogonal iff $u_i \cdot u_j = 0$ for all $i, j \in I$ with $i \neq j$. We say that the family $(u_i)_{i \in I}$ is orthonormal iff $u_i \cdot u_j = 0$ for all $i, j \in I$ with $i \neq j$, and $\|u_i\| = \sqrt{u_i \cdot u_i} = 1$ for all $i \in I$. For any subset $F$ of $E$, the set
$$F^\perp = \{v \in E \mid u \cdot v = 0,\ \text{for all } u \in F\}$$
of all vectors orthogonal to all vectors in $F$ is called the orthogonal complement of $F$.

Since inner products are positive definite, observe that for any vector $u \in E$, we have
$$u \cdot v = 0 \ \text{for all } v \in E \quad \text{iff} \quad u = 0.$$
It is immediately verified that the orthogonal complement $F^\perp$ of $F$ is a subspace of $E$.
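The two inequalities and the orthogonal complement can be illustrated concretely in $\mathbb{R}^5$. The sketch below (random data, illustrative only) checks Cauchy-Schwarz, Minkowski, the equality case for dependent vectors, and computes a spanning set for $F^\perp$ via the SVD (assuming, as holds with probability 1 here, that $u$ and $v$ are independent):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

# Cauchy-Schwarz and Minkowski for the standard inner product on R^5.
assert (u @ v) ** 2 <= (u @ u) * (v @ v)
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

# Equality in Cauchy-Schwarz when the vectors are linearly dependent.
w = 2.5 * u
assert np.isclose((u @ w) ** 2, (u @ u) * (w @ w))

# F-perp for F = span(u, v): stack u, v as the rows of M; then F-perp
# is the null space of M, read off from the last rows of V^T in the SVD
# (this assumes u, v are independent, so M has rank 2).
M = np.vstack([u, v])
_, s, Vt = np.linalg.svd(M)
Fperp = Vt[2:]                      # 3 rows spanning the orthogonal complement
assert np.allclose(Fperp @ u, 0) and np.allclose(Fperp @ v, 0)
```

Note that $\dim F^\perp = 5 - 2 = 3$, matching the general fact that $\dim F + \dim F^\perp = \dim E$ in finite dimension.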
Example 5. Going back to Example 3, and to the inner product
$$\langle f, g\rangle = \int_{-\pi}^{\pi} f(t)g(t)\,dt$$
on the vector space $\mathcal{C}[-\pi,\pi]$, it is easily checked that
$$\langle \sin px, \sin qx\rangle = \begin{cases} \pi & \text{if } p = q,\ p, q \geq 1,\\ 0 & \text{if } p \neq q,\ p, q \geq 1, \end{cases}$$
$$\langle \cos px, \cos qx\rangle = \begin{cases} \pi & \text{if } p = q,\ p, q \geq 1,\\ 0 & \text{if } p \neq q,\ p, q \geq 0, \end{cases}$$
and
$$\langle \sin px, \cos qx\rangle = 0$$
for all $p \geq 1$ and $q \geq 0$, and of course,
$$\langle 1, 1\rangle = \int_{-\pi}^{\pi} dx = 2\pi.$$
As a consequence, the family $(\sin px)_{p\geq 1} \cup (\cos qx)_{q\geq 0}$ is orthogonal. It is not orthonormal, but becomes so if we divide every trigonometric function by $\sqrt{\pi}$, and $1$ by $\sqrt{2\pi}$.

Proposition 6.2. Given a Euclidean space $E$, for any family $(u_i)_{i \in I}$ of nonnull vectors in $E$, if $(u_i)_{i \in I}$ is orthogonal, then it is linearly independent.

Proposition 6.3. Given a Euclidean space $E$, any two vectors $u, v \in E$ are orthogonal iff
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2.$$
One of the most useful features of orthonormal bases is that they afford a very simple method for computing the coordinates of a vector over any basis vector.

Indeed, assume that $(e_1, \dots, e_m)$ is an orthonormal basis. For any vector
$$x = x_1e_1 + \cdots + x_me_m,$$
if we compute the inner product $x \cdot e_i$, we get
$$x \cdot e_i = x_1\, e_1 \cdot e_i + \cdots + x_i\, e_i \cdot e_i + \cdots + x_m\, e_m \cdot e_i = x_i,$$
since
$$e_i \cdot e_j = \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i \neq j \end{cases}$$
is the property characterizing an orthonormal family. Thus,
$$x_i = x \cdot e_i,$$
which means that $x_ie_i = (x \cdot e_i)e_i$ is the orthogonal projection of $x$ onto the subspace generated by the basis vector $e_i$. If the basis is orthogonal but not necessarily orthonormal, then
$$x_i = \frac{x \cdot e_i}{e_i \cdot e_i} = \frac{x \cdot e_i}{\|e_i\|^2}.$$
All this is true even for an infinite orthonormal (or orthogonal) basis $(e_i)_{i \in I}$. However, remember that every vector $x$ is expressed as a linear combination
$$x = \sum_{i \in I} x_ie_i$$
where the family of scalars $(x_i)_{i \in I}$ has finite support, which means that $x_i = 0$ for all $i \in I - J$, where $J$ is a finite set.
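The coordinate formulas $x_i = x \cdot e_i$ (orthonormal case) and $x_i = (x \cdot e_i)/\|e_i\|^2$ (orthogonal case) can be sketched numerically. In the illustrative code below (not from the notes), an orthonormal basis of $\mathbb{R}^4$ is manufactured from a QR factorization of a random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# An orthonormal basis of R^4, obtained from the QR factorization of a
# random (almost surely invertible) matrix; the rows of `basis` are e_1..e_4.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
basis = Q.T

x = rng.standard_normal(4)

# Coordinates over an orthonormal basis: x_i = x . e_i.
coords = basis @ x
assert np.allclose(sum(c * e for c, e in zip(coords, basis)), x)

# (x . e_1) e_1 is the orthogonal projection of x onto span(e_1):
# the residual x - proj is orthogonal to e_1.
e1 = basis[0]
proj = (x @ e1) * e1
assert np.isclose((x - proj) @ e1, 0)

# For an orthogonal but not orthonormal basis, divide by ||e_i||^2.
scaled = basis * np.array([[1.0], [2.0], [3.0], [4.0]])   # rescaled rows, still orthogonal
coords2 = np.array([(x @ e) / (e @ e) for e in scaled])
assert np.allclose(sum(c * e for c, e in zip(coords2, scaled)), x)
```

Against a non-orthogonal basis, neither formula applies and the coordinates would instead require solving a linear system.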
Thus, even though the family $(\sin px)_{p\geq 1} \cup (\cos qx)_{q\geq 0}$ is orthogonal (it is not orthonormal, but becomes one if we divide every trigonometric function by $\sqrt{\pi}$, and $1$ by $\sqrt{2\pi}$; we won't because it looks messy!), the fact that a function $f \in \mathcal{C}^0[-\pi,\pi]$ can be written as a Fourier series as
$$f(x) = a_0 + \sum_{k=1}^{\infty}(a_k\cos kx + b_k\sin kx)$$
does not mean that $(\sin px)_{p\geq 1} \cup (\cos qx)_{q\geq 0}$ is a basis of this vector space of functions, because in general, the families $(a_k)$ and $(b_k)$ do not have finite support! In order for this infinite linear combination to make sense, it is necessary to prove that the partial sums
$$a_0 + \sum_{k=1}^{n}(a_k\cos kx + b_k\sin kx)$$
of the series converge to a limit when $n$ goes to infinity.
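This convergence can be observed numerically. The sketch below (illustrative, not from the notes) computes Fourier coefficients of $f(x) = |x|$ by the trapezoid rule, using the normalizations that follow from Example 5: $a_0 = \langle f, 1\rangle/(2\pi)$ and $a_k = \langle f, \cos kx\rangle/\pi$, $b_k = \langle f, \sin kx\rangle/\pi$, since $\langle 1,1\rangle = 2\pi$ and $\langle \cos kx, \cos kx\rangle = \pi$:

```python
import numpy as np

# Partial sums of the Fourier series of f(x) = |x| on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 20001)
f = np.abs(x)

def integrate(g):
    # Trapezoid rule on the uniform grid x.
    dx = x[1] - x[0]
    return dx * (g.sum() - 0.5 * (g[0] + g[-1]))

def partial_sum(n):
    # a_0 + sum_{k=1}^{n} (a_k cos kx + b_k sin kx), with
    # a_0 = <f,1>/(2 pi), a_k = <f,cos kx>/pi, b_k = <f,sin kx>/pi.
    s = np.full_like(x, integrate(f) / (2 * np.pi))
    for k in range(1, n + 1):
        s += integrate(f * np.cos(k * x)) / np.pi * np.cos(k * x)
        s += integrate(f * np.sin(k * x)) / np.pi * np.sin(k * x)
    return s

# f is continuous, even, and piecewise smooth, so here the convergence
# is uniform: the worst-case error shrinks as n grows.
err10 = np.max(np.abs(partial_sum(10) - f))
err50 = np.max(np.abs(partial_sum(50) - f))
assert err50 < err10 < 0.1
```

For a function with a jump discontinuity the picture changes: the partial sums still converge pointwise away from the jump, but not uniformly (the Gibbs phenomenon), which is exactly why the convergence claim needs proof rather than being automatic.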