
Symbolical Index Reduction and Completion Rules for Importing Tensor Index Notation into Programming Languages

Satoshi Egi

Mathematics Subject Classification (2010). Primary 53-04; Secondary 68-04.

Keywords. Tensor index notation; differential forms; scalar parameters; tensor parameters.

Abstract. In mathematics, many notations have been invented for the concise representation of mathematical formulae. Tensor index notation is one such notation and has played a crucial role in describing formulae in mathematical physics. This paper presents a programming language that can deal with symbolic tensor indices by introducing a set of tensor index rules that is compatible with two types of parameters: scalar and tensor parameters. When a tensor parameter obtains a tensor as an argument, the function treats the tensor argument as a whole. In contrast, when a scalar parameter obtains a tensor as an argument, the function is applied to each component of the tensor. On a language with scalar and tensor parameters, we can design a set of index reduction rules that allows users to use tensor index notation for arbitrary user-defined functions without requiring additional description. Furthermore, we can also design index completion rules that allow users to concisely define the operators for differential forms, such as the wedge product, exterior derivative, and Hodge star operator. In our proposal, all these tensor operators are user-defined functions and can be passed as arguments of higher-order functions.

1. Introduction

Since the latter half of the twentieth century, after the first implementation of a compiler for a high-level programming language, a large number of notations have been invented in the field of programming languages. Lexical scoping [10], higher-order functions [28, 36], and pattern matching [13, 16, 17] are features specific to programming, invented for describing algorithms. At the same time, researchers have evolved programming languages by importing successful mathematical notations. For example, the decimal number system, function modularization, and infix notation for the basic arithmetic operators have been imported into most programming languages [9]. The importation of mathematical notations into programming is not always easy, because the semantics of some mathematical notations are vague, and it is complex to implement them as part of a programming language.

This paper discusses a method for importing tensor index notation, invented by Ricci and Levi-Civita [32] for dealing with higher-order tensors, into programming languages. Tensor calculus appears in various fields of computer science: it is an important application of symbolic computation [25], it is heavily used in computational physics [20] and computer vision [19], and it also appears in machine learning for handling multidimensional data [21]. The importation of tensor index notation makes programming in these fields easier.

The current major method for dealing with tensors is a special syntax that describes loops for generating multi-dimensional arrays. The Table [6] expression of the Wolfram language is such a syntax construct. $X_{ij} + Y_{ij}$ is represented with the Table expression as follows. The program assumes that the dimension corresponding to each index of the tensors is a constant M.

    Table[X[[i,j]] + Y[[i,j]], {i,M}, {j,M}]
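For readers less familiar with the Wolfram language, the following NumPy sketch (added here only as a point of reference; the array contents and the dimension M are illustrative and do not appear in the paper) computes the same componentwise sum with explicit loops that mirror the Table expression above.

    import numpy as np

    M = 3                               # the paper assumes every index ranges over a constant M
    X = np.arange(M * M).reshape(M, M)  # illustrative M-by-M tensors
    Y = np.ones((M, M))

    # Z_ij = X_ij + Y_ij, written as explicit loops to mirror Table[...].
    Z = np.array([[X[i, j] + Y[i, j] for j in range(M)] for i in range(M)])

    # NumPy broadcasting gives the same result without the loops.
    assert np.array_equal(Z, X + Y)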
For contracting tensors, we use the Sum [5] expression inside Table. $X_{ik} Y_{kj}$ is represented as follows.

    Table[Sum[X[[i,k]] * Y[[k,j]], {k,M}], {i,M}, {j,M}]

This method has the advantage that we can use an arbitrary function defined for scalar values in tensor operations. The following Wolfram program represents $\partial X_{ij} / \partial x_k$, where D is the differentiation function of the Wolfram language.

    Table[D[X[[i,j]], x[[k]]], {i,M}, {j,M}, {k,M}]

Due to this advantage, the Wolfram language has been used by mathematicians in actual research [26, 27].

However, in this method, we cannot modularize tensor operators such as tensor multiplication as functions. Due to this restriction, we cannot syntactically distinguish applications of different tensor operators, such as tensor multiplication, the wedge product, and the Lie derivative, in programs, because we must spell these operators out as combinations of Table and Sum every time we use them. Modularization by functions is also important for combining index notation with higher-order functions. If we could pass tensor operators to higher-order functions, we could represent a formula such as $X_{i_1} X_{i_2} \cdots X_{i_n}$ (where the number of tensors multiplied depends on the parameter n) by passing the operator for tensor multiplication to the fold function [12].

There is other existing work that takes the same approach as the Wolfram language: NumPy's einsum operator [4], Diderot's EIN operator [23], and tensor comprehensions [39]. Some of them provide a syntactic construct whose appearance is close to mathematical formulae. However, they share the same restriction on function modularization. This restriction comes from the requirement that users specify the indices of the result tensor (e.g., "ij" in $A_{ij} = X_{ik} Y_{kj}$) so that the system can determine whether to contract a pair of identical symbolic indices.
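To make this restriction concrete, here is a small NumPy sketch (an illustration added for this discussion, not code from the cited systems): with einsum, whether the repeated index k is contracted is decided by the output specification that the caller writes, not by the symbolic indices attached to the arguments.

    import numpy as np

    M = 3
    X = np.random.rand(M, M)
    Y = np.random.rand(M, M)

    # A_ij = X_ik Y_kj: the caller must spell out the result indices "ij";
    # k is contracted only because it is omitted from the output specification.
    A = np.einsum('ik,kj->ij', X, Y)

    # Keeping k in the output yields the uncontracted componentwise product instead.
    T = np.einsum('ik,kj->ikj', X, Y)

This is the restriction on modularization discussed above: the decision to contract lives in the index string at each call site rather than in the indices written on the tensor arguments themselves.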
Maxima [2, 35, 38] takes a different approach from the Wolfram language. In Maxima, we describe formulae of tensor calculus using several special operators prepared for tensors, such as + and ·, that support tensor index notation. + is a function that sums the components of two tensors given as arguments. · is a function for tensor multiplication: it takes the tensor product of the two tensors given as arguments and sums over the trace if there is a pair of a superscript and a subscript with the same index variable. This method enables index notation to be represented directly in a program, just as in the mathematical expressions. However, index notation can be used only with functions that are specially prepared for it. One reason is that the index rules differ for each operator. For example, + and · work differently for the same arguments: $A_i + B_i$ returns a vector, but $A_i \cdot B_i$ returns a scalar. Åhlander's work [8], which implements index notation in C++, takes the same approach.

Array-oriented programming languages, such as APL and J, take a completely different approach. They do not use tensor index notation to represent tensor calculus. Instead, they introduce a new notion, function rank [11]. Function rank specifies how an operator is mapped to the components of tensors. When the specified function rank is 0 for an argument matrix, the operator is mapped to each scalar component of the matrix ($A_i + B_{jk}$). When the specified function rank is 1 for an argument matrix, the operator is mapped to the rows of the matrix, regarding the matrix as a vector of vectors ($A_j + B_{ij}$). When the specified function rank is 2 for an argument matrix, the operator is applied to the matrix directly ($A_i + B_{ij}$). The following J session illustrates these three cases.

    J> (2 $ 1 2) +"1 0 (2 2 $ 10 20 30 40)
    11 12
    21 22

    31 32
    41 42
    J> (2 $ 1 2) +"1 1 (2 2 $ 10 20 30 40)
    11 22
    31 42
    J> (2 $ 1 2) +"1 2 (2 2 $ 10 20 30 40)
    11 21
    32 42

A similar idea to function rank has also been imported into various programming languages and frameworks, including the Wolfram language [1] and NumPy [7]. However, function rank has a limitation: it cannot represent an expression that requires transposition of an argument tensor, e.g., $A_{ij} + B_{ji}$ (this expression requires the transposition of the matrix B).

This paper shows that a combination of a set of symbolic index reduction and completion rules with scalar and tensor parameters, which are a simplified notion of function rank, enables us to use tensor index notation for arbitrary functions defined just for scalar values, without any additional description. In our method, tensor operators that handle symbolic tensor indices are not distinguished from other user-defined functions; therefore, we can pass these tensor operators as arguments of higher-order functions.

2. Language Design for Importing Tensor Index Notation

This section presents a new method for importing tensor index notation into programming languages. Briefly, it is achieved by introducing two types of parameters, scalar parameters and tensor parameters, together with simple index reduction rules. First, we introduce scalar and tensor parameters. Second, we introduce a set of index reduction rules that is compatible with them. The combination of scalar and tensor parameters and the proposed index reduction rules enables us to apply user-defined functions to tensors using tensor index notation. Third, we introduce index completion rules for omitted tensor indices. By designing the index completion rules for omitted indices properly, we can concisely define the operators even for differential forms [34], such as the wedge product, exterior derivative, and Hodge star operator. The method proposed in this paper has already been implemented in the Egison programming language [15]. Egison has a syntax similar to that of the Haskell programming language.

2.1. Scalar and Tensor Parameters

Scalar and tensor parameters are a notion similar to function rank [11]. When a scalar parameter obtains a tensor as an argument, the function is applied to each component of the tensor. In contrast, when a tensor parameter obtains a tensor as an argument, the function treats the tensor argument as a whole. We call a function that takes only scalar parameters a scalar function, and a function that takes only tensor parameters a tensor function. For example, "+", "-", "*", and "/" should be defined as scalar functions, whereas a function for multiplying tensors and a function for the matrix determinant should be defined as tensor functions.
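A minimal Python sketch of this distinction (illustrative only; this is not Egison code, and the helpers apply_scalar and apply_tensor are invented for the example) is shown below: a function passed through a scalar parameter is mapped over every component of a tensor argument, whereas a function passed through a tensor parameter receives the tensor as a whole.

    import numpy as np

    def apply_scalar(f, *tensors):
        # Scalar parameters: f is applied to each tuple of corresponding components.
        return np.vectorize(f)(*tensors)

    def apply_tensor(f, *tensors):
        # Tensor parameters: f receives the tensor arguments unchanged.
        return f(*tensors)

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[10.0, 20.0], [30.0, 40.0]])

    print(apply_scalar(lambda x, y: x + y, A, B))   # componentwise sum: "+" as a scalar function
    print(apply_tensor(np.linalg.det, A))           # determinant treats the matrix as a whole

This sketch ignores symbolic indices entirely; how componentwise application interacts with them is the role of the index reduction rules introduced in the following subsections.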