Introduction to Linear Bialgebra


University of New Mexico, UNM Digital Repository
Mathematics and Statistics Faculty and Staff Publications, Academic Department Resources, 2005

INTRODUCTION TO LINEAR BIALGEBRA

Florentin Smarandache, University of New Mexico, [email protected]
W. B. Vasantha Kandasamy
K. Ilanthenral

Follow this and additional works at: https://digitalrepository.unm.edu/math_fsp
Part of the Algebra Commons, Analysis Commons, Discrete Mathematics and Combinatorics Commons, and the Other Mathematics Commons

Recommended Citation:
Smarandache, Florentin; W. B. Vasantha Kandasamy; and K. Ilanthenral. "INTRODUCTION TO LINEAR BIALGEBRA." (2005). https://digitalrepository.unm.edu/math_fsp/232

This Book is brought to you for free and open access by the Academic Department Resources at UNM Digital Repository. It has been accepted for inclusion in Mathematics and Statistics Faculty and Staff Publications by an authorized administrator of UNM Digital Repository. For more information, please contact [email protected], [email protected], [email protected].

INTRODUCTION TO LINEAR BIALGEBRA

W. B. Vasantha Kandasamy
Department of Mathematics
Indian Institute of Technology, Madras
Chennai – 600036, India
e-mail: [email protected]
web: http://mat.iitm.ac.in/~wbv

Florentin Smarandache
Department of Mathematics
University of New Mexico
Gallup, NM 87301, USA
e-mail: [email protected]

K. Ilanthenral
Editor, Maths Tiger, Quarterly Journal
Flat No. 11, Mayura Park, 16, Kazhikundram Main Road, Tharamani, Chennai – 600 113, India
e-mail: [email protected]

HEXIS
Phoenix, Arizona
2005

This book can be ordered in a paper bound reprint from:

Books on Demand
ProQuest Information & Learning (University of Microfilm International)
300 N. Zeeb Road, P.O. Box 1346, Ann Arbor, MI 48106-1346, USA
Tel.: 1-800-521-0600 (Customer Service)
http://wwwlib.umi.com/bod/

and online from:

Publishing Online, Co. (Seattle, Washington State) at: http://PublishingOnline.com

This book has been peer reviewed and recommended for publication by:

Dr. Jean Dezert, Office National d'Etudes et de Recherches Aerospatiales (ONERA), 29, Avenue de la Division Leclerc, 92320 Chantillon, France.
Dr. Iustin Priescu, Academia Technica Militaria, Bucharest, Romania.
Prof. Dr. B. S. Kirangi, Department of Mathematics and Computer Science, University of Mysore, Karnataka, India.

Copyright 2005 by W. B. Vasantha Kandasamy, Florentin Smarandache and K. Ilanthenral

Layout by Kama Kandasamy.
Cover design by Meena Kandasamy.
Many books can be downloaded from: http://www.gallup.unm.edu/~smarandache/eBooks-otherformats.htm

ISBN: 1-931233-97-7
Standard Address Number: 297-5092
Printed in the United States of America

CONTENTS

Preface

Chapter One: INTRODUCTION TO LINEAR ALGEBRA AND S-LINEAR ALGEBRA
1.1 Basic properties of linear algebra
1.2 Introduction to S-linear algebra
1.3 Some applications of S-linear algebra

Chapter Two: INTRODUCTORY CONCEPTS OF BASIC BISTRUCTURES AND S-BISTRUCTURES
2.1 Basic concepts of bigroups and bivector spaces
2.2 Introduction of S-bigroups and S-bivector spaces

Chapter Three: LINEAR BIALGEBRA, S-LINEAR BIALGEBRA AND THEIR PROPERTIES
3.1 Basic properties of linear bialgebra
3.2 Linear bitransformation and linear bioperators
3.3 Bivector spaces over finite fields
3.4 Representation of finite bigroup
3.5 Applications of bimatrix to bigraphs
3.6 Jordan biform
3.7 Application of bivector spaces to bicodes
3.8 Best biapproximation and its application
3.9 Markov bichains–biprocess

Chapter Four: NEUTROSOPHIC LINEAR BIALGEBRA AND ITS APPLICATION
4.1 Some basic neutrosophic algebraic structures
4.2 Smarandache neutrosophic linear bialgebra
4.3 Smarandache representation of finite Smarandache bisemigroup
4.4 Smarandache Markov bichains using S-neutrosophic bivector spaces
4.5 Smarandache neutrosophic Leontief economic bimodels

Chapter Five: SUGGESTED PROBLEMS

BIBLIOGRAPHY
INDEX
ABOUT THE AUTHORS

Preface

The algebraic structure, linear algebra, happens to be one of the subjects which lends itself to applications in several fields like coding or communication theory, Markov chains, representation of groups and graphs, Leontief economic models and so on. This book introduces, for the first time, a new algebraic structure called linear bialgebra, which is also a very powerful algebraic tool that lends itself to applications. With the recent introduction of bimatrices (2005) we have ventured in this book to introduce new concepts like linear bialgebra and Smarandache neutrosophic linear bialgebra, and also give the applications of these algebraic structures. It is important to mention here that it is a matter of simple exercise to extend these to linear n-algebra for any n greater than 2; for n = 2 we get the linear bialgebra.

This book has five chapters. In the first chapter we just introduce some basic notions of linear algebra and S-linear algebra and their applications. Chapter two introduces some new algebraic bistructures. In chapter three we introduce the notion of linear bialgebra and discuss several interesting properties about them. Also, an application of linear bialgebra to bicodes is given. A remarkable part of our research in this book is the introduction of the notion of birepresentation of bigroups. The fourth chapter introduces several neutrosophic algebraic structures, since they help in defining the new concepts of neutrosophic linear bialgebra, neutrosophic bivector spaces, Smarandache neutrosophic linear bialgebra and Smarandache neutrosophic bivector spaces. Their probable applications to real-world models are discussed. We have aimed to make this book engrossing and illustrative and supplemented it with nearly 150 examples. The final chapter gives 114 problems which will be a boon for the reader to understand and develop the subject. The main purpose of this book is to familiarize the reader with the applications of linear bialgebra to real-world problems.
Finally, we express our heart-felt thanks to Dr. K. Kandasamy whose assistance and encouragement in every manner made this book possible.

W. B. VASANTHA KANDASAMY
FLORENTIN SMARANDACHE
K. ILANTHENRAL

Chapter One
INTRODUCTION TO LINEAR ALGEBRA AND S-LINEAR ALGEBRA

In this chapter we give a brief introduction to linear algebra and S-linear algebra and their applications. This chapter has three sections. In section one, we just recall the basic definition of linear algebra and some of the important theorems. In section two we give the definition of S-linear algebra and some of its basic properties. Section three gives a few applications of linear algebra and S-linear algebra.

1.1 Basic properties of linear algebra

In this section we give the definition of linear algebra and state a few important theorems, like the Cayley-Hamilton theorem, the Cyclic Decomposition Theorem and the Generalized Cayley-Hamilton Theorem, and give some properties of linear algebra.

DEFINITION 1.1.1: A vector space or a linear space consists of the following:

i. a field F of scalars;
ii. a set V of objects called vectors;
iii. a rule (or operation) called vector addition, which associates with each pair of vectors α, β ∈ V a vector α + β in V, called the sum of α and β, in such a way that
   a. addition is commutative: α + β = β + α;
   b. addition is associative: α + (β + γ) = (α + β) + γ;
   c. there is a unique vector 0 in V, called the zero vector, such that α + 0 = α for all α in V;
   d. for each vector α in V there is a unique vector –α in V such that α + (–α) = 0;
iv. a rule (or operation) called scalar multiplication, which associates with each scalar c in F and a vector α in V a vector c·α in V, called the product of c and α, in such a way that
   1. 1·α = α for every α in V;
   2. (c1c2)·α = c1·(c2·α);
   3. c·(α + β) = c·α + c·β;
   4. (c1 + c2)·α = c1·α + c2·α;
for all α, β ∈ V and c, c1, c2 ∈ F.

It is important to note, as the definition states, that a vector space is a composite object consisting of a field, a set of 'vectors' and two operations with certain special properties. V is a linear algebra if V has a multiplicatively closed binary operation '·' which is associative; i.e., if v1, v2 ∈ V, then v1·v2 ∈ V. The same set of vectors may be part of a number of distinct vector spaces.

We simply, by default of notation, say V is a vector space over the field F and call the elements of V vectors only as a matter of convenience, for the vectors in V may not bear much resemblance to any pre-assigned concept of vector which the reader has.

THEOREM (CAYLEY-HAMILTON): Let T be a linear operator on a finite dimensional vector space V. If f is the characteristic polynomial for T, then f(T) = 0; in other words, the minimal polynomial divides the characteristic polynomial for T.

THEOREM (CYCLIC DECOMPOSITION THEOREM): Let T be a linear operator on a finite dimensional vector space V and let W0 be a proper T-admissible subspace of V. There exist non-zero vectors α1, …, αr in V with respective T-annihilators p1, …, pr such that

i. V = W0 ⊕ Z(α1; T) ⊕ … ⊕ Z(αr; T);
ii. pt divides pt–1, t = 2, …, r.

Furthermore, the integer r and the annihilators p1, …, pr are uniquely determined by (i) and (ii) and the fact that no αt is 0.

THEOREM (GENERALIZED CAYLEY-HAMILTON THEOREM): Let T be a linear operator on a finite dimensional vector space V. Let p and f be the minimal and characteristic polynomials for T, respectively. Then

i. p divides f;
ii. p and f have the same prime factors, except for multiplicities.
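Because the Cayley-Hamilton theorem is stated abstractly above, a quick numerical check can make it concrete. The following is a minimal sketch, not from the book: it assumes NumPy is available, and the matrix A is an arbitrary illustrative choice standing in for a linear operator on a finite dimensional space.

```python
# Minimal sketch (illustrative, not from the book): numerically verify
# the Cayley-Hamilton theorem f(A) = 0 for a small, arbitrarily chosen A.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Coefficients of the characteristic polynomial f(x) = det(xI - A),
# highest degree first; for this A, f(x) = x^2 - 5x + 6.
coeffs = np.poly(A)

# Evaluate f(A) with genuine matrix powers (np.polyval would apply
# elementwise powers, which is not what the theorem asserts).
f_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
             for k, c in enumerate(coeffs))

print(np.allclose(f_of_A, np.zeros_like(A)))  # True: f(A) is the zero matrix
```

The same loop evaluates any polynomial at a matrix argument, so it can also be used to check by hand that a candidate minimal polynomial annihilates a given operator.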
Recommended publications
  • On the Bicoset of a Bivector Space
  • 21. Orthonormal Bases
  • On Free Quasigroups and Quasigroup Representations (Stefanie Grace Wang, Iowa State University)
  • Multivector Differentiation and Linear Algebra, 17th Santaló Summer School
  • Linear Algebra for Dummies
  • Algebra of Linear Transformations and Matrices Math 130 Linear Algebra
  • Determinants Linear Algebra MATH 2010
  • Bases for Infinite Dimensional Vector Spaces Math 513 Linear Algebra Supplement
  • QUIVER BIALGEBRAS and MONOIDAL CATEGORIES 3 K N ≥ 0
  • Arxiv:2011.04704V2 [Cs.LO] 22 Mar 2021 Eain,Bttre Te Opttoal Neetn Oessuc Models Interesting Well
  • Determinants Math 122 Calculus III D Joyce, Fall 2012
  • Characterization of Hopf Quasigroups