MATH 205 HOMEWORK #5 OFFICIAL SOLUTION

Problem 1: An inner product on a vector space $V$ over $F$ is a bilinear map $\langle\cdot,\cdot\rangle : V \times V \to F$ satisfying the extra conditions

• $\langle v, w\rangle = \langle w, v\rangle$, and
• $\langle v, v\rangle \geq 0$, with equality if and only if $v = 0$.

(a) Show that the standard dot product on $\mathbb{R}^n$ is an inner product.

(b) Show that $(f, g) \mapsto \int_0^1 f(x)g(x)\,dx$ is an inner product on $C([0,1], \mathbb{R})$.

(c) Suppose that $F$ is ordered. Prove that for any $v, w \in V$,
$$\langle v, w\rangle^2 \leq \langle v, v\rangle \langle w, w\rangle.$$
When does equality hold? What standard inequality in trigonometry does this reflect when $V = \mathbb{R}^n$?

(d) We say that two vectors $v, w$ in $V$ are orthogonal if $\langle v, w\rangle = 0$. Suppose that $T : V \to V$ is a linear transformation satisfying $\langle Tv, w\rangle = \langle v, Tw\rangle$ for all $v, w$. Show that eigenvectors of $T$ with different eigenvalues are orthogonal.

Solution: Note that if we can show that a bilinear form is symmetric, then it is linear in the first variable if and only if it is linear in the second. Thus showing linearity in one variable is enough.

(a) Let $v = (a_1, \dots, a_n)$ and $w = (b_1, \dots, b_n)$. The standard dot product is symmetric, as
$$\langle v, w\rangle = a_1 b_1 + \cdots + a_n b_n = b_1 a_1 + \cdots + b_n a_n = \langle w, v\rangle.$$
Let $\lambda \in \mathbb{R}$ and $u = (c_1, \dots, c_n)$. Then
$$\langle v + \lambda u, w\rangle = (a_1 + \lambda c_1) b_1 + \cdots + (a_n + \lambda c_n) b_n = a_1 b_1 + \cdots + a_n b_n + \lambda (c_1 b_1 + \cdots + c_n b_n) = \langle v, w\rangle + \lambda \langle u, w\rangle.$$
Thus $\langle\cdot,\cdot\rangle$ is bilinear. In addition,
$$\langle v, v\rangle = a_1^2 + \cdots + a_n^2 \geq 0,$$
with equality if and only if $a_i = 0$ for all $i$. Thus the standard dot product is an inner product, as desired.

(b) Note that
$$\langle f, g\rangle = \int_0^1 f(x) g(x)\,dx = \int_0^1 g(x) f(x)\,dx = \langle g, f\rangle,$$
so that $\langle\cdot,\cdot\rangle$ is symmetric. Let $\lambda \in \mathbb{R}$ and $h \in C([0,1], \mathbb{R})$. Then
$$\int_0^1 (f(x) + \lambda h(x)) g(x)\,dx = \int_0^1 f(x) g(x)\,dx + \lambda \int_0^1 h(x) g(x)\,dx,$$
so that $\langle\cdot,\cdot\rangle$ is bilinear. Lastly, suppose $f \neq 0$, so that there exists $x_0$ such that $f(x_0) \neq 0$. By continuity there exists some $\delta > 0$ such that $|f(x)| > |f(x_0)/2|$ for $|x - x_0| < \delta$ (taking $\delta$ small enough that the interval of integration below lies in $[0,1]$, and using $[x_0 - \delta, x_0]$ instead if $x_0 = 1$). Then
$$\int_0^1 f(x)^2\,dx \geq \int_{x_0}^{x_0 + \delta} |f(x_0)/2|^2\,dx = |f(x_0)/2|^2\,\delta > 0.$$
Thus if $f \neq 0$ then $\langle f, f\rangle > 0$; clearly if $f = 0$ then $\langle f, f\rangle = 0$. Thus the second property of an inner product holds as well, and we are done.

(c) First note that if either $u$ or $v$ is $0$ then the inequality holds trivially, with equality; thus we now consider the case when both are nonzero. Let $w = u - \frac{\langle u, v\rangle}{\langle v, v\rangle} v$. Then we have
$$\langle w, v\rangle = \langle u, v\rangle - \langle u, v\rangle = 0.$$
Now consider
$$0 \leq \langle w, w\rangle = \langle u, u\rangle - 2\frac{\langle u, v\rangle^2}{\langle v, v\rangle} + \frac{\langle u, v\rangle^2}{\langle v, v\rangle} = \langle u, u\rangle - \frac{\langle u, v\rangle^2}{\langle v, v\rangle}.$$
Rearranging this, we get
$$\langle u, u\rangle \langle v, v\rangle \geq \langle u, v\rangle^2,$$
as desired. Equality holds exactly when $w = 0$, which means that $u$ is a scalar multiple of $v$. In the classical case $V = \mathbb{R}^n$, this reduces to the trigonometric inequality $\cos^2\theta \leq 1$.

(d) Suppose that $u$ is an eigenvector of $T$ with eigenvalue $\lambda$ and $v$ is an eigenvector of $T$ with eigenvalue $\rho$. Then
$$\lambda \langle u, v\rangle = \langle Tu, v\rangle = \langle u, Tv\rangle = \rho \langle u, v\rangle.$$
Thus if $\lambda \neq \rho$ we must have $\langle u, v\rangle = 0$. In other words, $u$ and $v$ are orthogonal.

Problem 2: Show that $(F^{\oplus\infty})^* \cong F^{\times\infty}$. Conclude that it is not the case that $V$ and $V^*$ are always isomorphic.

Solution: Suppose that $f \in (F^{\oplus\infty})^*$. Then $f$ is uniquely determined by its values on a basis of $F^{\oplus\infty}$. Taking the standard basis $\{e_i\}_{i=1}^\infty$, where $e_i$ has a $1$ in the $i$th position and $0$s elsewhere, we see that the data of $f$ consists exactly of a scalar in $F$ for each index $i$. These add and scale pointwise. Thus $f$ corresponds exactly to a vector in $F^{\times\infty}$. Since $F^{\times\infty}$ is never isomorphic to $F^{\oplus\infty}$, we see that $V^*$ is not always isomorphic to $V$.
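Before moving on, here is a quick numerical sanity check of Problem 1 (not part of the official solution; a small sketch assuming NumPy is available). It tests the Cauchy–Schwarz inequality of part (c) for the standard dot product and the orthogonality statement of part (d) for a symmetric (hence self-adjoint) matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Part (c): Cauchy-Schwarz for the standard dot product on R^n.
v, w = rng.standard_normal(5), rng.standard_normal(5)
assert np.dot(v, w) ** 2 <= np.dot(v, v) * np.dot(w, w)
# Equality exactly when one vector is a scalar multiple of the other:
w_par = 3.0 * v
assert np.isclose(np.dot(v, w_par) ** 2, np.dot(v, v) * np.dot(w_par, w_par))

# Part (d): a symmetric matrix T satisfies <Tv, w> = <v, Tw> for the dot
# product, so eigenvectors with distinct eigenvalues should be orthogonal.
A = rng.standard_normal((4, 4))
T = A + A.T                              # symmetric, hence self-adjoint
eigvals, eigvecs = np.linalg.eig(T)      # a random symmetric T has distinct eigenvalues
for i in range(4):
    for j in range(i + 1, 4):
        if not np.isclose(eigvals[i], eigvals[j]):
            assert abs(np.dot(eigvecs[:, i], eigvecs[:, j])) < 1e-10
print("Cauchy-Schwarz and orthogonality checks passed")
```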
Problem 3: Suppose that $F'$ is a field containing $F$ and $V$ is an $F$-vector space. If we consider $F'$ to be an $F$-vector space, we can form the tensor product $F' \otimes V$, which is naturally an $F$-vector space. Show that it is also an $F'$-vector space. This is called the change of base of $V$.

Solution: In order to show that $F' \otimes V$ is an $F'$-vector space we need to define scalar multiplication. For a pure tensor $\alpha \otimes v$ we define scalar multiplication by $\lambda \in F'$ by
$$\lambda(\alpha \otimes v) = (\lambda\alpha) \otimes v.$$
If this is well-defined then it is clearly associative, since multiplication in $F'$ is associative. Similarly, it is unital: if $\lambda = 1$ then it clearly returns the same vector. We also have
$$\lambda(\alpha \otimes v + \beta \otimes w) = (\lambda\alpha) \otimes v + (\lambda\beta) \otimes w$$
and
$$(\lambda + \rho)(\alpha \otimes v) = ((\lambda + \rho)\alpha) \otimes v = (\lambda\alpha + \rho\alpha) \otimes v = (\lambda\alpha) \otimes v + (\rho\alpha) \otimes v.$$
Thus if scalar multiplication is well-defined then it satisfies the axioms of a vector space.

We have defined scalar multiplication by its action on the generators of $F' \otimes V$. To show that this is well-defined we need to check that it preserves the relations; in other words, that the scalar multiple of any element of the relation subspace $A_0$ stays in $A_0$. We therefore check (here $a \in F$):
$$\lambda\big((\alpha + \beta) \otimes v - \alpha \otimes v - \beta \otimes v\big) = (\lambda(\alpha + \beta)) \otimes v - (\lambda\alpha) \otimes v - (\lambda\beta) \otimes v = ((\lambda\alpha) + (\lambda\beta)) \otimes v - (\lambda\alpha) \otimes v - (\lambda\beta) \otimes v \in A_0,$$
$$\lambda\big(\alpha \otimes (v + v') - \alpha \otimes v - \alpha \otimes v'\big) = (\lambda\alpha) \otimes (v + v') - (\lambda\alpha) \otimes v - (\lambda\alpha) \otimes v' \in A_0,$$
$$\lambda\big((a\alpha) \otimes v - a(\alpha \otimes v)\big) = (\lambda a\alpha) \otimes v - (\lambda a)(\alpha \otimes v) = (a\lambda\alpha) \otimes v - a(\lambda(\alpha \otimes v)) = (a\lambda\alpha) \otimes v - a((\lambda\alpha) \otimes v) \in A_0,$$
$$\lambda\big(\alpha \otimes (av) - a(\alpha \otimes v)\big) = (\lambda\alpha) \otimes (av) - a(\lambda(\alpha \otimes v)) = (\lambda\alpha) \otimes (av) - a((\lambda\alpha) \otimes v) \in A_0.$$
Thus multiplication by $\lambda$ preserves $A_0$, and is thus well-defined on $F' \otimes V$. Hence $F' \otimes V$ is an $F'$-vector space.

Problem 4: Prove the universal property for tensor products. In other words, show that for any vector spaces $U$, $V$, and $W$ there is a bijection
$$\{\text{bilinear maps } U \times V \to W\} \longleftrightarrow \{\text{linear maps } U \otimes V \to W\}.$$

Solution: Note that there exists a bilinear map $\varphi : U \times V \to U \otimes V$ which maps $(u, v)$ to $u \otimes v$. Thus we have a function
$$R : \{\text{linear maps } U \otimes V \to W\} \longrightarrow \{\text{bilinear maps } U \times V \to W\}$$
defined by mapping a linear map $T : U \otimes V \to W$ to the bilinear map $T \circ \varphi$. We need to show that this map is a bijection.

First we check that $R$ is injective. Recall that $S = \{u \otimes v \mid u \in U,\ v \in V\}$ is a spanning set for $U \otimes V$; thus any linear map $T : U \otimes V \to W$ is uniquely determined by its values on $S$. If $T, T' : U \otimes V \to W$ satisfy $T \circ \varphi = T' \circ \varphi$, then by definition $T|_S = T'|_S$; thus $T = T'$. Thus $R$ is injective.

Now we need to check that $R$ is surjective. Let $T : U \times V \to W$ be a bilinear map. We need to show that there exists $T' : U \otimes V \to W$ such that $T' \circ \varphi = T$. We define
$$T'(u \otimes v) = T(u, v).$$
If this is well-defined then it satisfies the desired condition; thus we need to check that it is zero on $A_0$. We check this on the generators of $A_0$:
$$T'\big((u + u') \otimes v - u \otimes v - u' \otimes v\big) = T(u + u', v) - T(u, v) - T(u', v) = T(u + u', v) - T(u + u', v) = 0,$$
$$T'\big(u \otimes (v + v') - u \otimes v - u \otimes v'\big) = T(u, v + v') - T(u, v) - T(u, v') = T(u, v + v') - T(u, v + v') = 0,$$
$$T'\big((\lambda u) \otimes v - \lambda(u \otimes v)\big) = T(\lambda u, v) - \lambda T(u, v) = \lambda T(u, v) - \lambda T(u, v) = 0,$$
$$T'\big(u \otimes (\lambda v) - \lambda(u \otimes v)\big) = T(u, \lambda v) - \lambda T(u, v) = \lambda T(u, v) - \lambda T(u, v) = 0.$$
Thus $T'$ is well-defined and $R$ is surjective.
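As a concrete finite-dimensional illustration of this bijection (not part of the official solution; a sketch assuming NumPy), identify $\mathbb{R}^m \otimes \mathbb{R}^n$ with $\mathbb{R}^{mn}$ via $u \otimes v \mapsto \mathrm{kron}(u, v)$. A bilinear map $H(u, v) = u^\top M v$ then corresponds to the linear functional given by the flattened matrix $M$, and $R$ recovers $H$ by precomposing that linear functional with $\varphi$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4

# A bilinear map H : R^m x R^n -> R given by H(u, v) = u^T M v.
M = rng.standard_normal((m, n))

def H(u, v):                       # bilinear in (u, v)
    return u @ M @ v

def phi(u, v):                     # the canonical bilinear map (u, v) |-> u (x) v
    return np.kron(u, v)

def T_prime(t):                    # the corresponding *linear* map on R^m (x) R^n ~ R^{mn}
    return M.reshape(-1) @ t       # flattened M acts as a linear functional

# R(T') = T' o phi should recover H (the surjectivity step of the proof).
for _ in range(5):
    u, v = rng.standard_normal(m), rng.standard_normal(n)
    assert np.isclose(T_prime(phi(u, v)), H(u, v))
print("T' o phi agrees with the bilinear map H")
```

Injectivity is also visible here: a linear functional on $\mathbb{R}^{mn}$ is determined by its values on the Kronecker products, since they span the space.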
Problem 5: Let $\{u_i\}_{i=1}^m$ and $\{v_j\}_{j=1}^n$ be bases of $U$ and $V$, respectively. Show that a general element $w = \sum_{i=1}^m \sum_{j=1}^n w_{ij}\, u_i \otimes v_j$ is the sum of $r$ pure tensors if and only if the $m \times n$ matrix $(w_{ij})$ has rank at most $r$.

Solution: Note that if we replace the basis $\{u_i\}_{i=1}^m$ with the basis $(u_1, \dots, u_{i-1}, \lambda u_i + u_j, u_{i+1}, \dots, u_m)$, then the matrix for $w$ written in terms of the new basis agrees with $(w_{ij})$, except that row $j$ turns into $R_j - \lambda^{-1} R_i$ (and row $i$ is scaled by $\lambda^{-1}$). Thus we can perform row reductions on the matrix $(w_{ij})$ by replacing the basis $\{u_i\}_{i=1}^m$. Similarly, we can perform column reductions by replacing the basis $\{v_j\}_{j=1}^n$. A matrix has rank $r$ if and only if it can be reduced, using row and column reductions, to a matrix whose only nonzero entries are $r$ ones, with no two ones in any row or column. If the reduced matrix has only $r$ nonzero entries, then writing $w$ in the corresponding bases exhibits $w$ as a sum of $r$ pure tensors. Conversely, if $w = \sum_{k=1}^r x_k \otimes y_k$ is a sum of $r$ pure tensors, then expanding each $x_k$ and $y_k$ in the given bases writes $(w_{ij})$ as a sum of $r$ rank-one matrices, so $(w_{ij})$ has rank at most $r$.
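A numerical sketch of this rank criterion in coordinates (not part of the official solution; assumes NumPy): summing $r$ pure tensors produces a coefficient matrix of rank at most $r$, and conversely the singular value decomposition rewrites the coefficient matrix, and hence $w$, as a sum of $\operatorname{rank}(w_{ij})$ rank-one (pure tensor) terms.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 5, 6, 3

# Sum of r pure tensors  w = sum_k x_k (x) y_k  <->  W = sum_k outer(x_k, y_k).
X = rng.standard_normal((r, m))
Y = rng.standard_normal((r, n))
W = sum(np.outer(x, y) for x, y in zip(X, Y))
assert np.linalg.matrix_rank(W) <= r          # rank is at most the number of pure tensors

# Conversely, the SVD  W = sum_k s_k outer(u_k, v_k)  rewrites w as a sum of
# rank(W) pure tensors (absorb the singular value s_k into either factor).
U, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-12))
W_rebuilt = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in range(rank))
assert np.allclose(W, W_rebuilt)
print(f"rank(W) = {rank}; {rank} pure tensors suffice")
```

The SVD plays the role of the row-and-column reduction in the proof: it chooses new bases for $U$ and $V$ in which the coefficient matrix has exactly $\operatorname{rank}(W)$ nonzero entries.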