Vector Spaces, Duals and Endomorphisms

Vector spaces, duals and endomorphisms

A real vector space $V$ is a set equipped with an additive operation which is commutative and associative, has a zero element $0$ and has an additive inverse $-v$ for any $v \in V$ (so $V$ is an abelian group under addition). Further there is an operation of multiplication of the reals on the vectors, $(r, v) \to rv \in V$, for each real $r$ and each vector $v \in V$, called scalar multiplication, which obeys, for any reals $r$ and $s$ and any vectors $v \in V$ and $w \in V$, the relations:

$$0v = 0, \quad 1v = v, \quad (-1)v = -v, \quad r(sv) = (rs)v, \quad (r + s)v = rv + sv, \quad r(v + w) = rv + rw.$$

The trivial vector space, said to be of dimension zero, is a vector space consisting of only the zero vector. The basic family of non-trivial examples consists of the spaces $\mathbb{R}^n$, $n \in \mathbb{N}$. Here $\mathbb{R}^n$ consists of all $n$-tuples $x = (x^1, x^2, \dots, x^n)$, with each $x^i$ real. The operations of $\mathbb{R}^n$, valid for any reals $x^i, y^i$, $i = 1, 2, \dots, n$, and any real $s$, are:

$$x + y = (x^1 + y^1, x^2 + y^2, \dots, x^n + y^n), \quad sx = (sx^1, sx^2, \dots, sx^n).$$

The zero element is $0 = (0, 0, \dots, 0)$ (often this element is just written $0$). The additive inverse of $x$ is $-x = (-x^1, -x^2, \dots, -x^n)$.

Given a pair of vector spaces $V$ and $W$, a map $f : V \to W$ is said to be linear if $f(rx + sy) = rf(x) + sf(y)$, for any reals $r$ and $s$ and any $x \in V$ and $y \in V$. In particular we have $f(0) = 0$ and $f(-v) = -f(v)$, for any $v \in V$. Denote the space of linear maps from $V$ to $W$ by $\mathrm{Hom}(V, W)$. If $f$ and $g$ are in $\mathrm{Hom}(V, W)$, their sum map is defined by $(f + g)(v) = f(v) + g(v)$, for any $v \in V$. Also we can multiply $f \in \mathrm{Hom}(V, W)$ by a real scalar $r$, giving the map $rf$, such that $(rf)(v) = rf(v)$ for any $v \in V$. Then $f + g$ and $rf$ each lie in $\mathrm{Hom}(V, W)$ and these operations give $\mathrm{Hom}(V, W)$ the natural structure of a vector space.

If $f \in \mathrm{Hom}(V, W)$ and $g \in \mathrm{Hom}(W, X)$, for vector spaces $V$, $W$ and $X$, then the composition $g \circ f : V \to X$ is well-defined and linear, so lies in $\mathrm{Hom}(V, X)$. Also the composition map $(f, g) \to g \circ f$ is linear in each argument.

A linear map $f \in \mathrm{Hom}(V, W)$ is said to be an epimorphism if $f$ is surjective, a monomorphism if $f$ is injective (equivalently, if and only if the equation $f(v) = 0$ has as its only solution the vector $v = 0$), and an isomorphism if $f$ is bijective, in which case $f$ has an inverse $f^{-1}$ such that $f \circ f^{-1} = \mathrm{id}_W$ and $f^{-1} \circ f = \mathrm{id}_V$ are the identity maps (each of the latter is an isomorphism, each its own inverse). All trivial vector spaces are isomorphic.

The space $\mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ is isomorphic to the space $\mathbb{R}^{mn}$. An element $f \in \mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ is given by the formula, for any $x \in \mathbb{R}^n$, $f(x) = y$ where $y^i = \sum_{j=1}^{n} f^i_j x^j$, $i = 1, 2, \dots, m$, for an $m$ by $n$ matrix $f^i_j$. Then $f$ is surjective if and only if the matrix $f^i_j$ has rank $m$, and is injective if and only if there are no solutions to the matrix equation $f^i_j x^j = 0$ except for the solution $x^i = 0$, $i = 1, 2, \dots, n$. Then $f$ is an isomorphism if and only if $m = n$ and the equation $f(x) = 0$ has as its only solution the vector $x = 0$, if and only if $m = n$ and $f$ has rank $n$.

A linear map from $V$ to itself is called an endomorphism. Then the space $\mathrm{Hom}(V, V)$ of all endomorphisms of $V$ is an algebra, with associative multiplication (distributive over addition) given by composition.

The space $\mathrm{Hom}(\mathbb{R}, V)$ is naturally isomorphic to $V$ itself: simply map $f$ in $\mathrm{Hom}(\mathbb{R}, V)$ to $f(1) \in V$.

The space $\mathrm{Hom}(V, \mathbb{R})$ is called the dual vector space of $V$ and is written $V^*$.
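The identification of $\mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$ with $m \times n$ matrices is easy to experiment with numerically. The following NumPy sketch is an illustration added here, not part of the original notes; the matrices F and G and the vector x are arbitrary choices. It represents linear maps by matrices, realises composition as matrix multiplication, and tests the rank criteria for surjectivity and injectivity.

```python
import numpy as np

# A linear map f: R^3 -> R^2 is determined by a 2x3 matrix F,
# acting by f(x) = F @ x (i.e. y^i = sum_j F[i, j] * x[j]).
F = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])   # 2x3: maps R^3 -> R^2
G = np.array([[1.0, 1.0],
              [2.0, 0.0],
              [0.0, 3.0]])         # 3x2: maps R^2 -> R^3

x = np.array([1.0, 2.0, 3.0])

y = F @ x                            # f(x) in R^2
z = G @ y                            # (g o f)(x) in R^3
assert np.allclose(z, (G @ F) @ x)   # composition corresponds to matrix product

# f is surjective iff rank F = m (here m = 2),
# and injective iff F @ x = 0 forces x = 0, i.e. rank F = n.
m, n = F.shape
print("rank F =", np.linalg.matrix_rank(F))
print("surjective:", np.linalg.matrix_rank(F) == m)
print("injective: ", np.linalg.matrix_rank(F) == n)
```

The assertion that $(G F)x = G(Fx)$ is exactly the statement that the matrix of $g \circ f$ is the product of the matrices of $g$ and $f$.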
If $f \in \mathrm{Hom}(V, W)$ and $\alpha \in \mathrm{Hom}(W, \mathbb{R})$, then $f^*(\alpha) = \alpha \circ f$ is an element of $\mathrm{Hom}(V, \mathbb{R})$. As $\alpha \in W^*$ varies, $f^*(\alpha)$ depends linearly on $\alpha$, so $f^*$ gives a linear map from $W^*$ to $V^*$. The map $f^* \in \mathrm{Hom}(W^*, V^*)$ is called the adjoint of the map $f$. Then the map $f \to f^*$ is linear and $(g \circ f)^* = f^* \circ g^*$, for any $f \in \mathrm{Hom}(V, W)$ and $g \in \mathrm{Hom}(W, X)$ and any vector spaces $V$, $W$ and $X$. The adjoint of an identity map is itself, so the adjoint of an isomorphism is an isomorphism.

Bases and finite dimensionality

A vector space $V$ is said to be finite dimensional if there is an isomorphism $f : V \to \mathbb{R}^n$, for some integer $n$. Any such isomorphism is called a basis for $V$. If $v \in V$ and $f$ is a basis, then $f(v) \in \mathbb{R}^n$ is called the co-ordinate vector of $v$ in the basis $f$. The basis elements of $\mathbb{R}^n$ are defined to be the $n$ vectors $\{e_i, \ i = 1, 2, \dots, n\}$, such that the $j$-th entry of $e_i$ is $\delta^j_i$, the Kronecker delta, so is $1$ if $j = i$ and zero otherwise.

Given a basis $f : V \to \mathbb{R}^n$ of a vector space $V$, the corresponding basis elements of $V$ are the vectors $\{f_i = f^{-1}(e_i), \ i = 1, 2, \dots, n\}$. Then for each $v \in V$, we have $f(v) = (v^1, v^2, \dots, v^n)$ if and only if $v = \sum_{i=1}^{n} v^i f_i$.

If $f : V \to \mathbb{R}^n$ and $g : V \to \mathbb{R}^m$ are bases for $V$, the map $f \circ g^{-1} : \mathbb{R}^m \to \mathbb{R}^n$ is an isomorphism (with inverse $g \circ f^{-1}$), so $m = n$. So the $n \in \mathbb{N}$ such that a basis $f : V \to \mathbb{R}^n$ exists is unique. It is called the dimension of $V$.

If $f : V \to \mathbb{R}^n$ is a basis, then the adjoint $f^* : (\mathbb{R}^n)^* \to V^*$ is an isomorphism, so $(f^*)^{-1} : V^* \to (\mathbb{R}^n)^*$ is an isomorphism. Now $(\mathbb{R}^n)^* = \mathrm{Hom}(\mathbb{R}^n, \mathbb{R})$, so is isomorphic to $\mathbb{R}^n$ itself. This isomorphism maps $t \in (\mathbb{R}^n)^*$ to the element $(t(e_1), t(e_2), \dots, t(e_n))$ of $\mathbb{R}^n$, where $e_1, e_2, \dots, e_n$ are the standard basis elements for $\mathbb{R}^n$. We call this isomorphism $T$. Then if $f : V \to \mathbb{R}^n$ is a basis, the map $f^T = T \circ (f^*)^{-1} : V^* \to \mathbb{R}^n$ is a basis for $V^*$, called the dual basis to that of $f$. Then $(f^T)^T = f$. In particular $V$ and $V^*$ have the same dimension.

If $v \in V$, a vector space, then $v$ gives an element $v'$ of $(V^*)^* = \mathrm{Hom}(V^*, \mathbb{R})$ by the formula:

$$v'(\alpha) = \alpha(v), \quad \text{for any } \alpha \in V^*.$$

Then the map $V \to (V^*)^*$, $v \to v'$, is an injection, and is an isomorphism if $V$ is finite dimensional.

Let $V$ and $W$ be vector spaces of dimensions $n$ and $m$ respectively. Let $s : V \to \mathbb{R}^n$ and $t : W \to \mathbb{R}^m$ be bases. Then if $f \in \mathrm{Hom}(V, W)$, define $\mu(f) = t \circ f \circ s^{-1}$. Then $\mu(f) \in \mathrm{Hom}(\mathbb{R}^n, \mathbb{R}^m)$, so is represented by a matrix. If $\{s_i, \ i = 1, 2, \dots, n\}$ are the basis elements of $V$ for the basis $s$ and $\{t_j, \ j = 1, 2, \dots, m\}$ are the basis elements of $W$ for the basis $t$, then we have $f(s_i) = \sum_{j=1}^{m} f^j_i t_j$, where $f^j_i$ is the matrix of $\mu(f)$. If $v \in V$ has $s$-co-ordinate vector $(v^1, \dots, v^n)$, then $f(v)$ has $t$-co-ordinate vector $(w^1, \dots, w^m)$, where $w^i = \sum_{j=1}^{n} f^i_j v^j$.
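To make the notions of co-ordinate vector, dual basis and the matrix of $\mu(f) = t \circ f \circ s^{-1}$ concrete, here is a small NumPy sketch, again an illustration rather than part of the notes; it assumes $V = W = \mathbb{R}^2$, and the matrices S, T and F are arbitrary choices. A basis is stored as the invertible matrix of the isomorphism $s : V \to \mathbb{R}^n$ (the co-ordinate vector of $v$ is then $Sv$), the basis elements $s_i = s^{-1}(e_i)$ are the columns of $S^{-1}$, and the dual basis elements are the rows of $S$.

```python
import numpy as np

# Work with V = W = R^2.  A basis s: V -> R^2 is an isomorphism,
# represented here by an invertible matrix S (coordinate vector of v is S @ v).
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # basis s of V
T = np.array([[2.0, 0.0],
              [1.0, 1.0]])      # basis t of W

# Basis elements s_i = s^{-1}(e_i): the columns of S^{-1}.
S_inv = np.linalg.inv(S)
s1, s2 = S_inv[:, 0], S_inv[:, 1]

# Dual-basis elements of V* are the rows of S: row i pairs with s_j to give delta_ij.
assert np.allclose(S @ np.column_stack([s1, s2]), np.eye(2))

# A linear map f: V -> W, given in standard coordinates by a matrix F.
F = np.array([[0.0, 1.0],
              [3.0, 2.0]])

# Matrix of mu(f) = t o f o s^{-1}: the matrix of f in the bases s and t.
mu_F = T @ F @ S_inv

# Check: if v has s-coordinate vector S @ v, then f(v) has
# t-coordinate vector T @ (F @ v), and this equals mu_F @ (S @ v).
v = np.array([1.0, -2.0])
assert np.allclose(T @ (F @ v), mu_F @ (S @ v))
print("matrix of f in the bases s, t:\n", mu_F)
```

The final assertion is the co-ordinate formula $w^i = \sum_j f^i_j v^j$ from the text, with $f^i_j$ the matrix of $\mu(f)$.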
Tensor algebras

Let $V$ be a real vector space. The tensor algebra of $V$, denoted $\mathcal{T}(V)$, is the associative algebra with identity over the reals, spanned by all the monomials of length $k$, $v_1 v_2 \dots v_k$, for all integers $k$, where each $v_i$ lies in $V$, subject to the relations of $V$:

• If $av + bw + cx = 0$, with $v$, $w$ and $x$ in $V$ and $a$, $b$ and $c$ real numbers, then $a\alpha v\beta + b\alpha w\beta + c\alpha x\beta = 0$, for any tensors $\alpha$ and $\beta$.

If a tensor is a linear combination of monomials all of the same length $k$, the tensor is said to be of type $k$. The vector space of tensors of type $k$ is denoted $\mathcal{T}^k(V)$. We allow $k = 0$, in which case the tensor is just a real number. The tensors of type one are naturally identified with the vector space $V$ itself.

If $\mu : W \to V$ is a homomorphism of vector spaces, then there is a unique algebra homomorphism $\mathcal{T}(\mu) : \mathcal{T}(W) \to \mathcal{T}(V)$ which reduces to $\mu$ when acting on $W$. Then $\mathcal{T}(\mu)$ maps each monomial $w_1 w_2 \dots w_k$ to $\mu(w_1)\mu(w_2)\dots\mu(w_k)$, for any $w_1, w_2, \dots, w_k$ in $W$ and any integer $k$. If $\lambda : X \to W$ is also a vector space homomorphism, then we have $\mathcal{T}(\mu \circ \lambda) = \mathcal{T}(\mu) \circ \mathcal{T}(\lambda)$.

Using the standard basis elements $\{e_i, \ i = 1, 2, \dots, n\}$ of $\mathbb{R}^n$, the tensor algebra $\mathcal{T}(\mathbb{R}^n)$ has a natural basis given by the monomials $e_{j_1} e_{j_2} \dots e_{j_k}$, where $1 \le j_r \le n$ for $r = 1, 2, \dots, k$ and any $k \in \mathbb{N}$, together with the number $1$, the natural basis for tensors of type zero.
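A concrete way to compute with tensors of type $k$ on $\mathbb{R}^n$ is to identify $\mathcal{T}^k(\mathbb{R}^n)$ with $\mathbb{R}^{n^k}$, sending a monomial $v_1 v_2 \dots v_k$ to the iterated Kronecker product of the co-ordinate vectors. This identification is standard but is an assumption of the sketch below rather than something stated in the notes; the helper `monomial`, the matrix `M` and the chosen vectors are illustrative. Under it, the monomial basis has the expected $n^k$ elements, $\mathcal{T}(\mu)$ acts on type-$k$ tensors by the $k$-fold Kronecker power of the matrix of $\mu$, and the defining linear relation is preserved.

```python
import numpy as np
from functools import reduce

def monomial(*vectors):
    """Coordinate vector of the monomial v1 v2 ... vk in T^k(R^n),
    identified with R^(n^k) via the iterated Kronecker product."""
    return reduce(np.kron, vectors)

n = 2
e = np.eye(n)          # standard basis e_1, e_2 of R^2

# The monomials e_{j1} e_{j2}, 1 <= j1, j2 <= 2, give a basis of T^2(R^2),
# which therefore has dimension n^k = 4.
basis_type2 = [monomial(e[:, i], e[:, j]) for i in range(n) for j in range(n)]
assert np.linalg.matrix_rank(np.column_stack(basis_type2)) == n ** 2

# T(mu) on type-2 tensors is the Kronecker square of the matrix M of mu,
# so that T(mu)(w1 w2) = mu(w1) mu(w2).
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
w1 = np.array([1.0, -1.0])
w2 = np.array([2.0, 3.0])
assert np.allclose(np.kron(M, M) @ monomial(w1, w2), monomial(M @ w1, M @ w2))

# The defining relation: if 2v + 3w + x = 0, the same linear relation holds
# after multiplying by tensors alpha, beta on either side.
v, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = -(2 * v + 3 * w)
alpha, beta = np.array([1.0, 4.0]), np.array([5.0, -1.0])
rel = 2 * monomial(alpha, v, beta) + 3 * monomial(alpha, w, beta) + monomial(alpha, x, beta)
assert np.allclose(rel, 0)
```

The second assertion is the functoriality statement $\mathcal{T}(\mu)(w_1 w_2) = \mu(w_1)\mu(w_2)$ in co-ordinates, and the last one checks that the relation imposed on monomials is consistent with this representation.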