INNER PRODUCTS ON n-INNER PRODUCT SPACES

SOOCHOW JOURNAL OF MATHEMATICS
Volume 28, No. 4, pp. 389-398, October 2002

BY HENDRA GUNAWAN

Abstract. In this note, we show that in any n-inner product space with $n \ge 2$ we can explicitly derive an inner product or, more generally, an $(n-k)$-inner product from the n-inner product, for each $k \in \{1, \ldots, n-1\}$. We also present some related results on n-normed spaces.

Received March 26, 2001; revised August 13, 2002.
AMS Subject Classification. 46C50, 46B20, 46B99, 46A19.
Key words. n-inner product spaces, n-normed spaces, inner product spaces.

1. Introduction

Let $n$ be a nonnegative integer and $X$ be a real vector space of dimension $d \ge n$ ($d$ may be infinite). A real-valued function $\langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle$ on $X^{n+1}$ satisfying the following five properties:

(I1) $\langle x_1, x_1 \mid x_2, \ldots, x_n \rangle \ge 0$, and $\langle x_1, x_1 \mid x_2, \ldots, x_n \rangle = 0$ if and only if $x_1, x_2, \ldots, x_n$ are linearly dependent;
(I2) $\langle x_1, x_1 \mid x_2, \ldots, x_n \rangle = \langle x_{i_1}, x_{i_1} \mid x_{i_2}, \ldots, x_{i_n} \rangle$ for every permutation $(i_1, \ldots, i_n)$ of $(1, \ldots, n)$;
(I3) $\langle x, y \mid x_2, \ldots, x_n \rangle = \langle y, x \mid x_2, \ldots, x_n \rangle$;
(I4) $\langle \alpha x, y \mid x_2, \ldots, x_n \rangle = \alpha \langle x, y \mid x_2, \ldots, x_n \rangle$, $\alpha \in \mathbb{R}$;
(I5) $\langle x + x', y \mid x_2, \ldots, x_n \rangle = \langle x, y \mid x_2, \ldots, x_n \rangle + \langle x', y \mid x_2, \ldots, x_n \rangle$;

is called an n-inner product on $X$, and the pair $(X, \langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle)$ is called an n-inner product space. For $n = 1$, the expression $\langle x, y \mid x_2, \ldots, x_n \rangle$ is to be understood as $\langle x, y \rangle$, which denotes nothing but an inner product on $X$. The concept of 2-inner product spaces was first introduced by Diminnie, Gähler and White [2, 3, 7] in the 1970's, while its generalization for $n \ge 2$ was developed by Misiak [12] in 1989. Note here that our definition of n-inner products is slightly simpler than, but equivalent to, that in [12].

On an n-inner product space $(X, \langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle)$, one may observe that the function
$$\|x_1, x_2, \ldots, x_n\| := \langle x_1, x_1 \mid x_2, \ldots, x_n \rangle^{1/2}$$
defines an n-norm, which enjoys the following four properties:

(N1) $\|x_1, \ldots, x_n\| \ge 0$, and $\|x_1, \ldots, x_n\| = 0$ if and only if $x_1, \ldots, x_n$ are linearly dependent;
(N2) $\|x_1, \ldots, x_n\|$ is invariant under permutation;
(N3) $\|\alpha x_1, x_2, \ldots, x_n\| = |\alpha| \, \|x_1, x_2, \ldots, x_n\|$, $\alpha \in \mathbb{R}$;
(N4) $\|x + y, x_2, \ldots, x_n\| \le \|x, x_2, \ldots, x_n\| + \|y, x_2, \ldots, x_n\|$.

Just as in an inner product space, we have the Cauchy-Schwarz inequality
$$|\langle x, y \mid x_2, \ldots, x_n \rangle| \le \|x, x_2, \ldots, x_n\| \, \|y, x_2, \ldots, x_n\|,$$
and the equality holds if and only if $x, y, x_2, \ldots, x_n$ are linearly dependent (see [9]). Furthermore, we have the polarization identity
$$\|x + y, x_2, \ldots, x_n\|^2 - \|x - y, x_2, \ldots, x_n\|^2 = 4 \langle x, y \mid x_2, \ldots, x_n \rangle,$$
and the parallelogram law
$$\|x + y, x_2, \ldots, x_n\|^2 + \|x - y, x_2, \ldots, x_n\|^2 = 2 \bigl( \|x, x_2, \ldots, x_n\|^2 + \|y, x_2, \ldots, x_n\|^2 \bigr).$$
The latter gives a characterization of n-inner product spaces.

By the polarization identity and the property (I2), one may observe that
$$\langle x, y \mid x_2, \ldots, x_n \rangle = \langle x, y \mid x_{i_2}, \ldots, x_{i_n} \rangle$$
for every permutation $(i_2, \ldots, i_n)$ of $(2, \ldots, n)$. Moreover, one can also show that $\langle x, y \mid x_2, \ldots, x_n \rangle = 0$ when $x$ or $y$ is a linear combination of $x_2, \ldots, x_n$, or when $x_2, \ldots, x_n$ are linearly dependent.

Now, for example, if $(X, \langle \cdot, \cdot \rangle)$ is an inner product space, then the function
$$\langle x, y \mid x_2, \ldots, x_n \rangle :=
\begin{vmatrix}
\langle x, y \rangle & \langle x, x_2 \rangle & \cdots & \langle x, x_n \rangle \\
\langle x_2, y \rangle & \langle x_2, x_2 \rangle & \cdots & \langle x_2, x_n \rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle x_n, y \rangle & \langle x_n, x_2 \rangle & \cdots & \langle x_n, x_n \rangle
\end{vmatrix}$$
defines an n-inner product, called the standard (or simple) n-inner product on $X$. Its induced n-norm $\|x_1, x_2, \ldots, x_n\|$ represents the volume of the n-dimensional parallelepiped spanned by $x_1, x_2, \ldots, x_n$.
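As a concrete check of the determinant formula above, the following sketch evaluates the standard n-inner product on $\mathbb{R}^d$ numerically. It is an illustration added to this transcription rather than part of the paper; it assumes NumPy and the usual dot product on $\mathbb{R}^d$, and the helper names standard_n_inner and standard_n_norm are ours. For $n = 2$ in $\mathbb{R}^3$, the induced 2-norm is the area of the parallelogram spanned by the two vectors, which the code compares against the cross-product formula; it also checks the Cauchy-Schwarz inequality on a sample triple.

```python
import numpy as np

def standard_n_inner(x, y, xs):
    """Standard n-inner product <x, y | x_2, ..., x_n> on R^d: the determinant
    whose i-th row is (<x_i, y>, <x_i, x_2>, ..., <x_i, x_n>), with x_1 = x."""
    rows = [x] + list(xs)          # x_1 = x, then x_2, ..., x_n
    cols = [y] + list(xs)          # y, then x_2, ..., x_n
    G = np.array([[np.dot(r, c) for c in cols] for r in rows])
    return np.linalg.det(G)

def standard_n_norm(x1, xs):
    """Induced n-norm ||x_1, ..., x_n|| = <x_1, x_1 | x_2, ..., x_n>^(1/2)."""
    return np.sqrt(standard_n_inner(x1, x1, xs))

# n = 2 in R^3: the induced 2-norm is the area of the spanned parallelogram
x  = np.array([1.0, 2.0, 0.0])
y  = np.array([0.0, 1.0, 3.0])
x2 = np.array([2.0, -1.0, 1.0])

area = standard_n_norm(x, [x2])
assert np.isclose(area, np.linalg.norm(np.cross(x, x2)))   # volume interpretation

# Cauchy-Schwarz: |<x, y | x2>| <= ||x, x2|| ||y, x2||
lhs = abs(standard_n_inner(x, y, [x2]))
rhs = standard_n_norm(x, [x2]) * standard_n_norm(y, [x2])
assert lhs <= rhs + 1e-12
```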
Historically, the concept of n-norms was introduced earlier by Gähler in order to generalize the notions of length, area and volume in a real vector space (see [4, 5, 6]). The objects studied here are n-dimensional parallelepipeds. The concept of n-inner products is thus useful when one talks about the angle between two n-dimensional parallelepipeds having the same $(n-1)$-dimensional base.

In this note, we shall show that in any n-inner product space with $n \ge 2$ we can derive an $(n-k)$-inner product from the n-inner product for each $k \in \{1, \ldots, n-1\}$. In particular, in any n-inner product space, we can derive an inner product from the n-inner product, so that one can talk about, for instance, the angle between two vectors, as one might like to. In addition, we shall present some related results on n-normed spaces. See [5] and [11] for previous results on these spaces.

2. Main Results

To avoid confusion, we shall sometimes denote an n-inner product by $\langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle_n$ and an n-norm by $\|\cdot, \ldots, \cdot\|_n$.

Let $(X, \langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle_n)$ be an n-inner product space with $n \ge 2$. Fix a linearly independent set $\{a_1, \ldots, a_n\}$ in $X$. With respect to $\{a_1, \ldots, a_n\}$, define for each $k \in \{1, \ldots, n-1\}$ the function $\langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle_{n-k}$ on $X^{n-k+1}$ by
$$\langle x, y \mid x_2, \ldots, x_{n-k} \rangle := \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \langle x, y \mid x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k} \rangle.$$
Then we have the following fact.

Fact 2.1. For every $k \in \{1, \ldots, n-1\}$, the function $\langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle_{n-k}$ defines an $(n-k)$-inner product on $X$. In particular, when $k = n-1$,
$$\langle x, y \rangle := \sum_{\{i_2, \ldots, i_n\} \subseteq \{1, \ldots, n\}} \langle x, y \mid a_{i_2}, \ldots, a_{i_n} \rangle \qquad (1)$$
defines an inner product on $X$.

Proof. It is not hard to see that the function $\langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle_{n-k}$ satisfies the five properties (I1)-(I5) of an $(n-k)$-inner product, except perhaps to establish the second part of (I1). To verify this property, suppose that $x_1, \ldots, x_{n-k}$ are linearly dependent. Then $\langle x_1, x_1 \mid x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k} \rangle = 0$ for every $\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$, and hence $\langle x_1, x_1 \mid x_2, \ldots, x_{n-k} \rangle = 0$. Conversely, suppose that $\langle x_1, x_1 \mid x_2, \ldots, x_{n-k} \rangle = 0$. Then $\langle x_1, x_1 \mid x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k} \rangle = 0$ for every $\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$. Hence $\{x_1, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\}$ is linearly dependent for every $\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$. By elementary linear algebra, this can only happen if $x_1, \ldots, x_{n-k}$ are linearly dependent (or if $x_1 = 0$ when $k = n-1$).

Corollary 2.2. Any n-inner product space is an $(n-k)$-inner product space for every $k = 1, \ldots, n-1$. In particular, an n-inner product space is an inner product space.

Corollary 2.3. Let $\|\cdot, \ldots, \cdot\|_n$ be the induced n-norm on $X$. Then, for each $k \in \{1, \ldots, n-1\}$, the following function
$$\|x_1, \ldots, x_{n-k}\| := \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \|x_1, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\|^2 \Bigr)^{1/2}$$
defines an $(n-k)$-norm that corresponds to $\langle \cdot, \cdot \mid \cdot, \ldots, \cdot \rangle_{n-k}$ on $X$. In particular,
$$\|x\| := \Bigl( \sum_{\{i_2, \ldots, i_n\} \subseteq \{1, \ldots, n\}} \|x, a_{i_2}, \ldots, a_{i_n}\|^2 \Bigr)^{1/2}$$
defines a norm that corresponds to the derived inner product $\langle \cdot, \cdot \rangle$ on $X$.
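To illustrate Fact 2.1, the following sketch (again an illustration added here, not part of the paper) implements the derived inner product (1), i.e. the case $k = n-1$, on $\mathbb{R}^d$. It assumes NumPy, takes the n-inner product to be the standard one from the previous snippet, and chooses $\{a_1, \ldots, a_n\}$ at random (hence almost surely linearly independent); the helper names are ours. It then spot-checks symmetry, linearity in the first slot, and positivity on random vectors.

```python
import itertools
import numpy as np

def standard_n_inner(x, y, xs):
    """Standard n-inner product <x, y | x_2, ..., x_n> on R^d (a Gram-type determinant)."""
    rows = [x] + list(xs)
    cols = [y] + list(xs)
    return np.linalg.det(np.array([[np.dot(r, c) for c in cols] for r in rows]))

def derived_inner(x, y, a):
    """Derived inner product (1): the sum of <x, y | a_{i_2}, ..., a_{i_n}>
    over all (n-1)-element subsets {i_2, ..., i_n} of {1, ..., n}."""
    n = len(a)
    return sum(standard_n_inner(x, y, [a[i] for i in subset])
               for subset in itertools.combinations(range(n), n - 1))

rng = np.random.default_rng(0)
d, n = 5, 3
a = [rng.standard_normal(d) for _ in range(n)]     # linearly independent almost surely

x, y, z = (rng.standard_normal(d) for _ in range(3))
alpha = 2.7

# (I3) symmetry, (I4)-(I5) linearity in the first slot, (I1) positivity
assert np.isclose(derived_inner(x, y, a), derived_inner(y, x, a))
assert np.isclose(derived_inner(alpha * x + z, y, a),
                  alpha * derived_inner(x, y, a) + derived_inner(z, y, a))
assert derived_inner(x, x, a) > 0
```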
Note that by using a derived inner product, one can develop the notion of orthogonality and the Fourier series theory in an n-inner product space, just like in an inner product space (see [3] and [13] for previous results in this direction). With respect to the derived inner product $\langle \cdot, \cdot \rangle$ defined by (1), one may observe that the set $\{a_1, \ldots, a_n\}$ is orthogonal and that $\|a_i\| = \|a_1, \ldots, a_n\|$ for every $i = 1, \ldots, n$ (see [8]). In particular, if $X$ is n-dimensional, then $\{a_1, \ldots, a_n\}$ forms an orthogonal basis for $X$ and each $x \in X$ can be written as
$$x = \|a_1, \ldots, a_n\|^{-2} \sum_{i=1}^{n} \langle x, a_i \rangle \, a_i.$$
Unlike in [13], we can now have an orthogonal set of $m$ vectors with $1 \le m < n$. In general, by using a derived inner product, we have a more relaxed condition for orthogonality than that in [3] or [13]. Furthermore, one may also use the derived inner products and their induced norms to study the convergence of sequences of vectors in an n-inner product space. See some recent results in [10].

2.1. Related results on n-normed spaces

Suppose now that $(X, \|\cdot, \ldots, \cdot\|_n)$ is an n-normed space and, as before, $\{a_1, \ldots, a_n\}$ is a linearly independent set in $X$. Then one may check that, for each $k \in \{1, \ldots, n-1\}$,
$$\|x_1, \ldots, x_{n-k}\| := \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \|x_1, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\|^2 \Bigr)^{1/2} \qquad (2)$$
defines an $(n-k)$-norm on $X$ (see [5] and [11] for similar results). In particular, the triangle inequality can be verified as follows:
$$\begin{aligned}
\|x + y, x_2, \ldots, x_{n-k}\|
&= \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \|x + y, x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\|^2 \Bigr)^{1/2} \\
&\le \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \bigl( \|x, x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\| + \|y, x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\| \bigr)^2 \Bigr)^{1/2} \\
&\le \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \|x, x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\|^2 \Bigr)^{1/2} + \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \|y, x_2, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\|^2 \Bigr)^{1/2} \\
&= \|x, x_2, \ldots, x_{n-k}\| + \|y, x_2, \ldots, x_{n-k}\|.
\end{aligned}$$
The first inequality follows from the triangle inequality for the n-norm, while the second one follows from the triangle inequality for the $\ell^2$-type norm.

In general, for $1 \le p \le \infty$, one may observe that
$$\|x_1, \ldots, x_{n-k}\| := \Bigl( \sum_{\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}} \|x_1, \ldots, x_{n-k}, a_{i_1}, \ldots, a_{i_k}\|^p \Bigr)^{1/p}$$
also defines an $(n-k)$-norm on $X$. Among these derived $(n-k)$-norms, however, the case $p = 2$ is special in the following sense.
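As a closing illustration, here is a short sketch of the construction (2) with $p = 2$ and $k = n-1$, again added to this transcription (not part of the paper) and assuming NumPy with the standard n-norm on $\mathbb{R}^d$, computed as the square root of a Gram determinant; the helper names are ours. Besides the triangle inequality, it confirms numerically the observation made above that $\|a_i\| = \|a_1, \ldots, a_n\|$ for every $i$.

```python
import itertools
import numpy as np

def standard_n_norm(vectors):
    """Standard n-norm ||x_1, ..., x_n|| on R^d: the square root of the Gram
    determinant, i.e. the volume of the parallelepiped spanned by the vectors."""
    G = np.array([[np.dot(u, v) for v in vectors] for u in vectors])
    return np.sqrt(max(np.linalg.det(G), 0.0))    # clip tiny negative round-off

def derived_norm(xs, a):
    """Derived (n-k)-norm (2) with p = 2: the l^2 sum of
    ||x_1, ..., x_{n-k}, a_{i_1}, ..., a_{i_k}|| over all k-element subsets
    of {1, ..., n}, where k = n - len(xs)."""
    n, k = len(a), len(a) - len(xs)
    return np.sqrt(sum(standard_n_norm(list(xs) + [a[i] for i in subset]) ** 2
                       for subset in itertools.combinations(range(n), k)))

rng = np.random.default_rng(1)
d, n = 4, 3
a = [rng.standard_normal(d) for _ in range(n)]     # linearly independent almost surely
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Triangle inequality for the derived norm with n - k = 1
assert derived_norm([x + y], a) <= derived_norm([x], a) + derived_norm([y], a) + 1e-12

# ||a_i|| = ||a_1, ..., a_n|| for every i: only the subset omitting i contributes
vol = standard_n_norm(a)
assert all(np.isclose(derived_norm([a[i]], a), vol) for i in range(n))
```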