MATH 235: Inner Product Spaces, SOLUTIONS to Assign. 7

Questions handed in: 3, 4, 5, 6, 9, 10.

Contents

1  Orthogonal Basis for Inner Product Space
2  Inner-Product Function Space
3  Weighted Inner Product in R^2 ***
   3.1  SOLUTIONS
4  Inner Product in S^3 ***
   4.1  SOLUTIONS
5  Inner Products in M22 and P2 ***
   5.1  SOLUTIONS
6  Cauchy-Schwarz and Triangle Inequalities ***
   6.1  SOLUTIONS
7  Length in Function Space
8  More on Function Space
9  Linear Transformations and Inner Products ***
   9.1  SOLUTIONS
10 MATLAB ***
   10.1  SOLUTIONS
      10.1.1  MATLAB Program for Solution
      10.1.2  Output of MATLAB Program for Solution

1 Orthogonal Basis for Inner Product Space

If V = P3 with the inner product <f, g> = ∫_{-1}^{1} f(x)g(x) dx, apply the Gram-Schmidt algorithm to obtain an orthogonal basis from B = {1, x, x^2, x^3}.

2 Inner-Product Function Space

Consider the vector space C[0, 1] of all continuously differentiable functions defined on the closed interval [0, 1]. The inner product in C[0, 1] is defined by <f, g> = ∫_0^1 f(x)g(x) dx. Find the inner products of the following pairs of functions and state whether they are orthogonal.

1. f(x) = cos(2πx) and g(x) = sin(2πx)
2. f(x) = x and g(x) = e^x
3. f(x) = x and g(x) = 3x

3 Weighted Inner Product in R^2 ***

Show whether or not the following are valid inner products in R^2. If one is a valid inner product, then find a nonzero vector that is orthogonal to the vector y = (2, 1)^T.

1. <u, v> := 7u1v1 + 1.2u2v2
2. <u, v> := -7u1v1 - 1.2u2v2
3. <u, v> := 7u1v1 - 1.2u2v2

3.1 SOLUTIONS

BEGIN SOLUTION: Note that in each case, the inner product can be written as <u, v> = u^T D v, for an appropriate diagonal matrix D.
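For the u^T D v form, validity comes down to the signs of the diagonal weights, which is easy to spot-check numerically. A NumPy sketch (an added illustration, not part of the original hand-in; the variable names are ours):

```python
import numpy as np

# <u, v> = u^T D v with D diagonal: the form is a valid inner product
# exactly when every diagonal weight is positive.
for weights in ([7.0, 1.2], [-7.0, -1.2], [7.0, -1.2]):
    D = np.diag(weights)
    valid = all(w > 0 for w in weights)
    print(weights, "valid inner product" if valid else "not valid")

# Case 1 is valid; a nonzero u orthogonal to y = (2, 1)^T solves
# 7*u1*2 + 1.2*u2*1 = 0, e.g. u1 = 1, u2 = -14/1.2.
D = np.diag([7.0, 1.2])
y = np.array([2.0, 1.0])
u = np.array([1.0, -14.0 / 1.2])
print(u @ D @ y)   # 0 up to rounding
```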
We see that <u, v> = u^T D v = (u^T D v)^T = v^T D u = <v, u>. And <αu, v + w> = α(u^T D (v + w)) = α(<u, v> + <u, w>). Therefore, the first three properties for an inner product all hold true.

1. For <u, v> := 7u1v1 + 1.2u2v2, the diagonal matrix D = [7 0; 0 1.2]. Therefore <u, u> = 7u1^2 + 1.2u2^2 >= 0, with equality if and only if the vector u = 0, i.e. this is a valid inner product. A vector orthogonal to the given y satisfies <u, y> = 7u1(2) + 1.2u2(1) = 0; e.g. we could set u1 = 1 and get u = (1, -14/1.2)^T. (To understand the geometry, we could draw the ellipse ||u|| = 1 and see which vectors are orthogonal to each other.)

2. For <u, v> := -7u1v1 - 1.2u2v2, the matrix D = [-7 0; 0 -1.2]. Therefore <u, u> = -7u1^2 - 1.2u2^2 < 0 whenever u != 0, i.e. this is not a valid inner product.

3. For <u, v> := 7u1v1 - 1.2u2v2, the matrix D = [7 0; 0 -1.2]. Therefore <u, u> = 7u1^2 - 1.2u2^2 < 0 whenever u != 0 and 7u1^2 < 1.2u2^2, i.e. this is not a valid inner product.

END SOLUTION.

4 Inner Product in S^3 ***

Let S^3 denote the vector space of 3x3 symmetric matrices.

1. Prove that <S, T> = tr(ST) is an inner product on S^3.

2. Let S1 = [1 0 1; 0 1 0; 1 0 0] and S2 = [1 0 1; 0 -3 0; 1 0 0]. Apply the Gram-Schmidt process in S^3 to find a matrix orthogonal to both S1 and S2.

3. Find an orthonormal basis for S^3 using the above three matrices.

4. Let T = [1 0 1; 0 1 -1; 1 -1 1]. Find the projection of T on the span of {S1, S2}.

4.1 SOLUTIONS

BEGIN SOLUTION:

1. (a) That tr(ST) = tr(TS) was proved in class already.
   (b) tr S(T + V) = tr(ST + SV) = tr(ST) + tr(SV), where the last equality follows straight from the definition of the trace as the sum of the diagonal elements.
   (c) tr(αS) = α tr(S) again follows from the definition, i.e. both are equal to sum_i (αS)_{ii}.
   (d) <S, S> = tr(SS) = tr(S^2) = sum_{ij} S_{ij}^2 >= 0, and it is = 0 if and only if each S_{ij} = 0, i.e. if and only if S = 0.

2. We first note that tr(S1 S2) = 1 + 1 + 1 - 3 = 0, i.e. S1 ⊥ S2.
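The trace computations here and the norms used in part 3 below can be confirmed numerically. A NumPy sketch (an added illustration, not part of the original hand-in):

```python
import numpy as np

# Trace inner product <S, T> = tr(S T) on 3x3 symmetric matrices.
S1 = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0]])
S2 = np.array([[1.0, 0.0, 1.0],
               [0.0, -3.0, 0.0],
               [1.0, 0.0, 0.0]])

ip = lambda S, T: np.trace(S @ T)

print(ip(S1, S2))               # 0.0: S1 and S2 are already orthogonal
print(ip(S1, S1), ip(S2, S2))   # 4.0 and 12.0, so ||S1|| = 2, ||S2|| = 2*sqrt(3)
```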
Therefore, we can apply one step of the process. But we need a matrix that is linearly independent of both S1, S2. By observation the matrix

   S3 = [0 0 0; 0 0 0; 0 0 1]

works and in fact is orthogonal to both. Note that any matrix of the form [-2d a d; a 0 b; d b c] does the job. Also, note we could choose a random matrix (linearly independent with S1, S2) and then apply one step of the process, e.g. choose R = [1 1 0; 1 1 0; 0 0 0]. Then we can find the third matrix by subtracting the projection onto the span of the first two, i.e.

   S3hat = R - (<R, S1>/<S1, S1>) S1 - (<R, S2>/<S2, S2>) S2
         = R - (2/4) S1 - (-2/12) S2
         = R - (1/2) S1 + (1/6) S2
         = [1 1 0; 1 1 0; 0 0 0] - (1/2)[1 0 1; 0 1 0; 1 0 0] + (1/6)[1 0 1; 0 -3 0; 1 0 0]
         = [2/3 1 -1/3; 1 0 0; -1/3 0 0].

3. An orthonormal basis using S3 is

   { (1/2) S1, (1/(2 sqrt(3))) S2, S3 },

since ||S1|| = 2, ||S2|| = 2 sqrt(3), and ||S3|| = 1. Alternatively, using S3hat, whose norm satisfies ||S3hat||^2 = 4/9 + 2 + 2/9 = 8/3, i.e. ||S3hat|| = 2 sqrt(6)/3, we get

   { (1/2) S1, (1/(2 sqrt(3))) S2, (3/(2 sqrt(6))) S3hat }.

4. The projection of T is

   (tr(T S1)/tr(S1 S1)) S1 + (tr(T S2)/tr(S2 S2)) S2 = (4/4) S1 + (0/12) S2 = S1.

END SOLUTION.

5 Inner Products in M22 and P2 ***

Show which of the following are valid inner products.

1. Let A = [a1 a2; a3 a4] and B = [b1 b2; b3 b4] be real matrices.
   (a) Define <A, B> := a1b1 + 2a2b3 + 3a3b2 + a4b4.
   (b) Define <A, B> := tr(B^T A).

2. Let p(x) and q(x) be polynomials in P2.
   (a) Define <p, q> := p(-1)q(-1) + p(1/2)q(1/2) + p(-3)q(-3).
   (b) Define <p, q> := p(-3)q(-3) + p(1/2)q(1/2) + p(-1)q(-1).

5.1 SOLUTIONS

BEGIN SOLUTION:

1. (a) Let A = [0 1; 0 0]. Then <A, A> = (0)(0) + 2(1)(0) + 3(0)(1) + (0)(0) = 0, i.e. ||A|| = 0 even though A != 0. Positivity fails, so this is not a valid inner product.
   (b) For any m x n matrix A, define vec(A) = [A(:,1); ...; A(:,n)], i.e. vec(A) is the vector formed from the columns of A. Note that tr(B^T A) = sum_{ij} B_{ij} A_{ij} = vec(B)^T vec(A). We can now show that this is a valid inner product in the usual way that the standard dot product is an inner product.

2. (a) This follows as does Example 2 in the text on page 429.
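The vec identity used in 1(b) is easy to spot-check numerically. A NumPy sketch (an added illustration, not part of the original hand-in):

```python
import numpy as np

# tr(B^T A) equals the dot product of the stacked entries of A and B.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

lhs = np.trace(B.T @ A)
rhs = B.ravel() @ A.ravel()   # vec(B)^T vec(A); row vs column stacking
                              # does not matter for the total sum
assert abs(lhs - rhs) < 1e-12
```

So <A, B> := tr(B^T A) is just the standard dot product on R^4 in disguise, which is why all four inner product axioms carry over.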
   (b) Changing the order of the points does not change the verification of any of the rules for verifying this is an inner product.

END SOLUTION.

6 Cauchy-Schwarz and Triangle Inequalities ***

1. Let D = [3 0 0; 0 π 0; 0 0 2]. Consider the inner product defined on R^3 by <u, v> = u^T D v. Let u = (-2, 3, -7)^T and v = (-4, 8, 9)^T. Verify that both inequalities hold.

2. Find the projection of v on the span of u and call it w. Verify that equality holds for the Cauchy-Schwarz inequality applied to u, w.

6.1 SOLUTIONS

BEGIN SOLUTION: Following are the MATLAB lines and the output for the solution.

D=sym(diag([3 pi 2]));
u=[-2 3 -7]';
v=[-4 8 9]';
lhs=abs(double(u'*D*v));
rhs= sqrt(double(u'*D*u))*sqrt(double(v'*D*v))

rhs =
  238.4100

disp('verify C.S. inequality, i.e. that rhs - lhs is nonnegative')
verify C.S. inequality, i.e. that rhs - lhs is nonnegative

rhs-lhs

ans =
  211.8082

lhs= (double((u+v)'*D*(u+v)));
rhs= (double((u)'*D*(u))) + (double((v)'*D*(v)));
disp('verify triangle inequality, i.e. that rhs - lhs is nonnegative')
verify triangle inequality, i.e. that rhs - lhs is nonnegative

rhs-lhs

ans =
  53.2036

disp('find w, the projection on the span of u')
find w, the projection on the span of u

w= (double(v'*D*u))/(double(u'*D*u))*u

w =
  0.3848
  -0.5772
  1.3467

lhs=abs(double(u'*D*w));
rhs= sqrt(double(u'*D*u))*sqrt(double(w'*D*w))

rhs =
  26.6018

disp('verify that rhs - lhs is now zero, i.e. equality holds')
verify that rhs - lhs is now zero, i.e. equality holds
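For readers without MATLAB, the same checks can be repeated with NumPy on the same data. A sketch (an added illustration, not part of the original hand-in):

```python
import numpy as np

D = np.diag([3.0, np.pi, 2.0])
u = np.array([-2.0, 3.0, -7.0])
v = np.array([-4.0, 8.0, 9.0])

ip = lambda x, y: x @ D @ y          # <x, y> = x^T D y
norm = lambda x: np.sqrt(ip(x, x))

# Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
assert abs(ip(u, v)) <= norm(u) * norm(v)

# Triangle inequality: ||u + v|| <= ||u|| + ||v||
assert norm(u + v) <= norm(u) + norm(v)

# Projection of v onto span{u}; Cauchy-Schwarz is tight for u, w.
w = ip(v, u) / ip(u, u) * u
assert abs(abs(ip(u, w)) - norm(u) * norm(w)) < 1e-9
print(np.round(w, 4))   # approximately [ 0.3848 -0.5772  1.3467]
```

Since w is a scalar multiple of u, |<u, w>| = ||u|| ||w|| exactly, which is the equality case of Cauchy-Schwarz.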