arXiv:1605.09523v4 [math.GR] 19 Sep 2017

On Equivalence of Matrices

Daizhan Cheng ⋆

Key Laboratory of Systems and Control, AMSS, Chinese Academy of Sciences, Beijing 100190, P.R.China

Abstract

A new matrix product, called the semi-tensor product (STP), is briefly reviewed. The STP extends the classical matrix product to two arbitrary matrices. Under STP the set of matrices becomes a monoid (semi-group with identity); its structures and properties are investigated. Then the generalized matrix addition is also introduced, which extends the classical matrix addition to a class of two matrices with different dimensions.

Motivated by the STP of matrices, two kinds of equivalences of matrices (including vectors) are introduced, which are called the matrix equivalence (M-equivalence) and the vector equivalence (V-equivalence) respectively. The lattice structure has been established for each equivalence. Under each equivalence, the corresponding quotient space becomes a vector space. Under M-equivalence, many algebraic, geometric, and analytic structures have been posed to the quotient space, which include (i) lattice structure; (ii) inner product and norm (distance); (iii) topology; (iv) a fiber bundle structure, called the discrete bundle; (v) bundled differential manifold; (vi) bundled Lie group and Lie algebra. Under V-equivalence, vectors of different dimensions form a vector space V, and a matrix A of arbitrary dimension is considered as an operator (linear mapping) on V. When A is a bounded operator (not necessarily square, but square matrices are included as a special case), the generalized characteristic function, eigenvalue, eigenvector, etc. are defined.

In one word, this new matrix theory overcomes the dimension barrier in a certain sense. It provides much more freedom for using the matrix approach in practical problems.

Key words: semi-tensor product/addition (STP/STA), vector product/addition (VP/VA), matrix/vector equivalence (M-/V-), lattice, topology, fiber bundle, bundled manifold/Lie algebra/Lie group (BM/BLA/BLG).

⋆ This work is supported partly by the National Natural Science Foundation of China under Grants 61773371 and 61733018. Corresponding author: Daizhan Cheng. Tel.: +86 10 8254 1232.

1 Preliminaries

1.1 Contents

For reading convenience, a list of contents is given as follows.

I. Preliminaries
1.1 Contents
1.2 Introduction
1.3 Symbols

II. M-equivalence and Lattice Structure
2.1 STP of Matrices
2.2 M-equivalence of Matrices
2.3 Lattice Structure on M_µ
2.4 Monoid and Quotient Monoid
2.5 Group Structure of Σ_µ
2.6 Semi-tensor Addition and Vector Space Structure of Σ_µ

III. Topology on M-equivalence Space
3.1 Topology via Sub-basis
3.2 Fiber Bundle Structure on M_µ
3.3 Coordinate Frame on Σ_µ
3.4 Inner Product on M_µ
3.5 Σ_µ as a Metric Space
3.6 Sub-space of Σ_µ

IV. Differential Structure on M-equivalence Space
4.1 Bundled Manifold
4.2 C^r Functions on Σ_µ
4.3 Generalized Inner Product
4.4 Vector Fields
4.5 Integral Curves
4.6 Forms
4.7 Tensor Fields

V. Lie Algebra Structure on Square M-equivalence Space
5.1 Lie Algebra on Square M-equivalence Space
5.2 Bundled Lie Algebra
5.3 Bundled Lie Sub-algebra
5.4 Further Properties of gl(F)

VI. Lie Group on Nonsingular M-equivalence Space
6.1 Bundled Lie Group
6.2 Relationship with gl(F)
6.3 Lie Subgroup of GL(F)
6.4 Symmetric Group

VII. V-equivalence
7.1 Equivalence of Vectors of Different Dimensions
7.2 Vector Space Structure on V-equivalence Space
7.3 Inner Product and Linear Mappings
7.4 Type-1 Invariant Subspace
7.5 Type-2 Invariant Subspace
7.6 Higher Order Linear Mapping
7.7 Invariant Subspace on V-equivalence Space
7.8 Generalized Linear System

VIII. Concluding Remarks

1.2 Introduction

Matrix theory and calculus are two classical and fundamental mathematical tools in modern science and technology. There are two mostly used operators on the set of matrices: the conventional matrix product and the matrix addition. Roughly speaking, the object of matrix theory is (M, +, ×), where M is the set of all matrices. Unfortunately, unlike the arithmetic system (R, +, ×), in matrix theory both "+" and "×" are restricted by the matrix dimensions. Precisely speaking: consider two matrices A ∈ M_{m×n} and B ∈ M_{p×q}. The "product" A × B is well posed if and only if n = p; the "addition" A + B is defined if and only if m = p and n = q. There are some other matrix products, such as the Kronecker product and the Hadamard product, but they are of different stories [24].

The main purpose of this paper is to develop a new matrix theory, which intends to overcome the dimension barrier by extending the matrix product and the matrix addition to two matrices which do not meet the classical dimension requirement. As generalizations of the classical ones, they should be consistent with the classical definitions. That is, when the dimension requirements on the two argument matrices in classical matrix theory are satisfied, the newly defined operators should coincide with the original ones.

Because of the extension of the two fundamental operators, many related concepts can be extended. For instance, the characteristic function, the eigenvalues and eigenvectors of a square matrix can be extended to certain non-square matrices; the Lie algebraic structure can also be extended to dimension-varying square matrices. All these extensions should be consistent with the classical ones. In one word, we are developing the classical matrix theory without violating any of the original matrix theory.

When we were working on generalizing the fundamental matrix operators, we met a serious problem: though the extended operators are applicable to certain sets of matrices with different dimensions, these sets fail to be vector spaces anymore. This drawback is not acceptable, because it blocks the way to extending many nice algebraic or geometric structures in matrix theory, such as the Lie algebraic structure and the manifold structure, to the enlarged set, which includes matrices of different dimensions. To overcome this obstacle, we eventually introduced certain equivalence relations. Then the quotient spaces, called the equivalence spaces, become vector spaces. Two equivalence relations have been proposed: the matrix equivalence (M-equivalence) and the vector equivalence (V-equivalence).

Then many nice algebraic, analytic, and geometric structures have been developed on the M-equivalence spaces. They are briefly introduced as follows:

• Lattice structure: The elements in each equivalence class form a lattice. The classes of spaces with different dimensions also form a lattice. The former and the latter are homomorphic. The lattices obtained for M-equivalence and V-equivalence are isomorphic.
• Topological structure: A topological structure is proposed for the M-equivalence space, built by using a topological sub-base. It is then proved that under this topology the equivalence space is a Hausdorff (T2) space and is second countable.
• Inner product structure: An inner product is proposed on the M-equivalence space, from which the norm (distance) is also obtained. It is proved that the topology induced by this norm is the same as the topology produced by the topological sub-base.
• Fiber bundle structure: A fiber bundle structure is proposed for the set of matrices (as total space) and the equivalence classes (as base space). The bundle structure is named the discrete bundle, because each fiber has the discrete topology.
• Bundled manifold structure: A (dimension-varying) manifold structure is proposed for the M-equivalence space. Its coordinate charts are constructed via the discrete bundle. Hence it is called a bundled manifold.
• Bundled Lie algebraic structure: A Lie algebra structure is proposed for the M-equivalence space. The Lie algebra is infinite dimensional, but almost all the properties of finite dimensional Lie algebras remain available.
• Bundled Lie group structure: For the M-equivalence classes of square nonsingular matrices a group structure is proposed. It also has the dimension-varying manifold structure. The algebraic and the geometric structures are consistent, and hence it becomes a Lie group. The relationship of this Lie group with the bundled Lie algebra is also investigated.

Under V-equivalence, all the vectors of varying dimensions form a vector space V, and any matrix A can be considered as a linear operator on V. A very important class of A, called the bounded operators, is investigated in detail. For a bounded operator A, which could be non-square, its characteristic function is proposed. Its eigenvalues and the corresponding eigenvectors are obtained. A generalized A-invariant subspace is discussed in detail.

This work is motivated by the semi-tensor product (STP). The STP of matrices was proposed firstly and formally in 2001 [3]. It was then applied to some Newton differential dynamic systems and their control problems [4], [51], [40]. A basic summarization was given in [5].

Since 2008, STP has been used to formulate and analyze Boolean networks as well as general logical dynamic systems, and to solve control design problems for those systems. It has since developed rapidly, as witnessed by hundreds of research papers. The multitude of applications of STP includes (i) logical dynamic systems [6], [18], [32]; (ii) systems biology [54], [20]; (iii) graph theory and formation control [45], [55]; (iv) circuit design and failure detection [33], [34], [9]; (v) game theory [21], [10], [11]; (vi) finite automata and symbolic dynamics [50], [57], [23]; (vii) coding and cryptography [58], [56]; (viii) fuzzy control [8], [17]; (ix) some engineering applications [49], [36]; and many other topics [7], [38], [52], [59], [60], [37]; just to name a few.

As a generalization of the conventional matrix product, STP is applicable to two matrices of arbitrary dimensions. In addition, this generalization keeps all fundamental properties of the conventional matrix product available. Therefore, it has become a very convenient tool for investigating many matrix-expression related problems.

Recently, the journal IET Control Theory & Applications published a special issue "Recent Developments in Logical Networks and Its Applications". It provides many up-to-date results on STP and its applications. In particular, we refer to the survey paper [39] in this special issue for a general introduction to STP with applications.

Up to this time, the main effort has been put on applications. Now, when we start to explore the mathematical foundation of STP, we find that its most significant characteristic is that it overcomes the dimension barrier. After serious thought, it can be seen that in fact STP is defined on equivalence classes. Following this train of thought, the matrix theory on equivalence spaces emerges. The outline of this new matrix theory is presented in this manuscript. The results in this paper are totally new except the concepts and basic properties of STP, which are presented in subsection 2.1.

1.3 Symbols

For ease of statement, we first give some notations:
(1) N: the set of natural numbers (i.e., N = {1, 2, ...});
(2) Z: the set of integers;
(3) Q: the set of rational numbers (Q+: the set of positive rational numbers);
(4) R: the field of real numbers;
(5) C: the field of complex numbers;
(6) F: a certain field of characteristic 0 (particularly, we can understand F = R or F = C);
(7) M_{m×n}: the set of m × n dimensional matrices over the field F. When the field F is obvious or does not affect the discussion, the superscript F can be omitted, and as a default F = R can be assumed;
(8) Col(A) (Row(A)): the set of columns (rows) of A; Col_i(A) (Row_i(A)): the i-th column (row) of A;
(9) D_k := {1, 2, ..., k}; D := D_2;
(10) δ_n^i: the i-th column of the identity matrix I_n;
(11) Δ_n := {δ_n^i | i = 1, 2, ..., n};
(12) L ∈ M_{m×r} is a logical matrix if Col(L) ⊂ Δ_m. The set of m × r logical matrices is denoted by L_{m×r};
(13) Assume L ∈ L_{m×r}. Then L = [δ_m^{i_1}, δ_m^{i_2}, ..., δ_m^{i_r}], which is briefly denoted as L = δ_m[i_1, i_2, ..., i_r];
(14) Let A = (a_{i,j}) ∈ M_{m×n} with a_{i,j} ∈ {0, 1} for all i, j. Then A is called a Boolean matrix. The set of m × n dimensional Boolean matrices is denoted by B_{m×n};
(15) the set of probabilistic vectors:
Υ_k := {(r_1, r_2, ..., r_k)^T | r_i ≥ 0, Σ_{i=1}^k r_i = 1};
(16) the set of probabilistic matrices:
Υ_{m×n} := {M ∈ M_{m×n} | Col(M) ⊂ Υ_m};
(17) a | b: a divides b;
(18) m ∧ n := gcd(m, n), the greatest common divisor of m and n;
(19) m ∨ n := lcm(m, n), the least common multiple of m and n;
(20) ⋉ (⋊): the left (right) semi-tensor product of matrices;
(21) ⋉~ (⋊~): the left (right) vector product of matrices;
(22) ∼ (∼_ℓ, ∼_r): the M-equivalence ((left, right) matrix equivalence);
(23) ↔ (↔_ℓ, ↔_r): the V-equivalence ((left, right) vector equivalence);
(24) the set of all matrices:
M = ∪_{i=1}^∞ ∪_{j=1}^∞ M_{i×j};
(25) M_{·×q} := {A ∈ M | the column number of A is q};
(26) the set of matrices with dimension ratio µ:
M_µ := {A ∈ M_{m×n} | m/n = µ};
(27) ≈: lattice homomorphism;
(28) ≅: lattice isomorphism;
(29) ≤: vector order;
(30) ⊑: vector space order;
(31) ≺: matrix order;
(32) ⊏: matrix space order;
(33) the overall matrix quotient space: Σ_M = M/∼;
(34) the µ-matrix quotient space: Σ_µ := M_µ/∼;
(35) the (µ = 1)-matrix quotient space: Σ := M_1/∼;
(36) the set of all vectors: V = ∪_{n=1}^∞ V_n (note that for the real case V_n ∼ R^n);
(37) the vector quotient space under V-equivalence: Ω_V := V/↔;
(38) the vector quotient subspace under V-equivalence: Ω_V^i := V^{[i,·]}/↔;
(39) the matrix quotient space under V-equivalence: Ω_M := M/↔;
(40) the matrix quotient subspace under V-equivalence: Ω_M^n := M_{·×n}/↔.
(41) Given a C^r manifold M (r could be ∞ or ω):
• its tangent space is T(M);
• its cotangent space is T*(M);
• the set of C^r functions on M is C^r(M);
• the set of vector fields on M is V^r(M); the set of co-vector fields is V*^r(M);
• the set of tensor fields on M with covariant order α and contravariant order β is T_α^β(M); when β = 0 it becomes T_α(M).

2 M-equivalence and Lattice Structure

2.1 STP of Matrices

This section gives a brief review of STP. We refer to [5], [6], [7] for details.

Definition 1 Let A ∈ M_{m×n}, B ∈ M_{p×q}, and let t = n ∨ p be the least common multiple of n and p. Then

(1) the left STP of A and B, denoted by A ⋉ B, is defined as

A ⋉ B := (A ⊗ I_{t/n})(B ⊗ I_{t/p}),   (1)

where ⊗ is the Kronecker product [24];

(2) the right STP of A and B is defined as

A ⋊ B := (I_{t/n} ⊗ A)(I_{t/p} ⊗ B).   (2)

In the following we mainly discuss the left STP, and briefly call the left STP the STP. Most properties of the left STP have their corresponding ones for the right STP. Please refer to [5] or [6] for their major differences.

Remark 2 If n = p, both the left and the right STP defined in Definition 1 degenerate to the conventional matrix product. That is, STP is a generalization of the conventional matrix product. Hence, as a default, in most cases the symbol ⋉ can be omitted (but not ⋊). That is, unless stated elsewhere, throughout this paper

AB := A ⋉ B.   (3)

The following propositions show that this generalization not only keeps the main properties of the conventional matrix product available, but also adds some new properties such as certain commutativity.

Associativity and distributivity are two fundamental properties of the conventional matrix product. When the product is generalized to STP, these two properties remain available.

Proposition 3
1. (Distributive Law)

F ⋉ (aG ± bH) = aF ⋉ G ± bF ⋉ H,
(aF ± bG) ⋉ H = aF ⋉ H ± bG ⋉ H,  a, b ∈ F.   (4)

2. (Associative Law)

(F ⋉ G) ⋉ H = F ⋉ (G ⋉ H).   (5)

The following proposition is inherited from the conventional matrix product.

Proposition 4
1.
(A ⋉ B)^T = B^T ⋉ A^T.   (6)
2. Assume A and B are invertible; then
(A ⋉ B)^{-1} = B^{-1} ⋉ A^{-1}.   (7)

The following proposition shows that the STP has a certain commutative property.

Proposition 5 Given A ∈ M_{m×n}:
1. Let Z ∈ R^t be a column vector. Then
Z A = (I_t ⊗ A) Z.   (8)
2. Let Z ∈ R^t be a row vector. Then
A Z = Z (I_t ⊗ A).   (9)

To explore further commuting properties, we introduce the swap matrix.

Definition 6 ([24]) The swap matrix W_{[m,n]} ∈ M_{mn×mn} is defined as

W_{[m,n]} = δ_{mn}[1, m+1, ..., (n−1)m+1; 2, m+2, ..., (n−1)m+2; ...; m, 2m, ..., nm].   (10)
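The operators above are easy to check numerically. The following sketch (our own illustration; the function names `stp` and `swap_matrix` are ours, not the paper's) implements the left STP of (1) and the swap matrix of (10), and verifies the associative law (5), the transpose rule (6), the degeneration to the ordinary product of Remark 2, and the swap property (12) stated below.

```python
import numpy as np

def stp(A, B):
    """Left semi-tensor product (1): A |x B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = n ∨ p."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = np.lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def swap_matrix(m, n):
    """Swap matrix W_[m,n] of (10): it sends X ⊗ Y to Y ⊗ X for X ∈ R^m, Y ∈ R^n."""
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry (X ⊗ Y)[i*n + j] = x_i y_j goes to position j*m + i of Y ⊗ X
            W[j * m + i, i * n + j] = 1.0
    return W

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((4, 5))

# Associative law (5), with mismatched dimensions throughout.
assoc_ok = np.allclose(stp(stp(A, B), C), stp(A, stp(B, C)))

# Transpose rule (6).
transp_ok = np.allclose(stp(A, B).T, stp(B.T, A.T))

# Remark 2: when the dimensions match, STP is the conventional product.
D = rng.standard_normal((3, 4))
classic_ok = np.allclose(stp(A, D), A @ D)

# Swap property (12); for column vectors X |x Y = X ⊗ Y.
X, Y = rng.standard_normal(2), rng.standard_normal(3)
swap_ok = np.allclose(swap_matrix(2, 3) @ np.kron(X, Y), np.kron(Y, X))
```

The only design point is in `stp`: each factor is padded by a Kronecker identity up to the common multiple t, after which the ordinary product applies.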
The following proposition shows that the swap matrix is orthogonal.

Proposition 7

W_{[m,n]}^T = W_{[m,n]}^{-1} = W_{[n,m]}.   (11)

The fundamental function of the swap matrix is to "swap" two factors.

Proposition 8
1. Let X ∈ R^m, Y ∈ R^n be two column vectors. Then
W_{[m,n]} ⋉ X ⋉ Y = Y ⋉ X.   (12)
2. Let X ∈ R^m, Y ∈ R^n be two row vectors. Then
X ⋉ Y ⋉ W_{[m,n]} = Y ⋉ X.   (13)

The swap matrix can also swap two factor matrices of the Kronecker product [5], [47].

Proposition 9 Let A ∈ M_{m×n} and B ∈ M_{p×q}. Then

W_{[m,p]}(A ⊗ B)W_{[q,n]} = B ⊗ A.   (14)

Remark 10 Assume A ∈ M_{n×n} and B ∈ M_{p×p} are square matrices. Then (14) becomes

W_{[n,p]}(A ⊗ B)W_{[p,n]} = B ⊗ A.   (15)

As a consequence, A ⊗ B and B ⊗ A are similar.

The following example is an application of Proposition 9.

Example 11 Prove

e^{A ⊗ I_k} = e^A ⊗ I_k.   (16)

Assume A ∈ M_{n×n} is a square matrix and B = A ⊗ I_k. Note that

W(A ⊗ I_k)W^{-1} = I_k ⊗ A = diag(A, A, ..., A)  (k copies),

where W = W_{[n,k]}. Then

e^B = e^{W^{-1}(I_k ⊗ A)W} = W^{-1} diag(e^A, e^A, ..., e^A) W = W^{-1}(I_k ⊗ e^A)W = e^A ⊗ I_k.

Remark 12 Comparing the product of numbers with the product of matrices, two major differences are: (i) the matrix product has a dimension restriction while the scalar product has none; (ii) the matrix product is not commutative while the scalar product is. When the conventional matrix product is extended to STP, these two weaknesses are eliminated to a certain degree. First, the dimension restriction has been removed. Second, in addition to Proposition 5, which shows certain commutativity, the use of the swap matrix also provides a certain commutativity property. All these improvements make the STP more convenient to use than the conventional matrix product.

2.2 M-equivalence of Matrices

The set of all matrices (over a certain field F) is denoted by M, that is,

M = ∪_{m=1}^∞ ∪_{n=1}^∞ M_{m×n}.

It is obvious that STP is an operator defined as ⋉ (or ⋊): M × M → M. Observing the definition of STP carefully, it is not difficult to find that when we use STP to multiply A with B, instead of A itself, we modify A by Kronecker-multiplying identities of different sizes in order to multiply different B's. In fact, STP multiplies an equivalence class of A, precisely ⟨A⟩ = {A, A ⊗ I_2, A ⊗ I_3, ...}, with an equivalence class of B, that is, ⟨B⟩ = {B, B ⊗ I_2, B ⊗ I_3, ...}.

Motivated by this fact, we first propose an equivalence over the set of matrices, called the matrix equivalence (∼_ℓ or ∼_r). Then STP can be considered as an operator over the equivalence classes. We give a rigorous definition for the equivalence.

Definition 13 Let A, B ∈ M be two matrices.

(1) A and B are said to be left matrix equivalent (LME), denoted by A ∼_ℓ B, if there exist two identity matrices I_s, I_t, s, t ∈ N, such that
A ⊗ I_s = B ⊗ I_t.
(2) A and B are said to be right matrix equivalent (RME), denoted by A ∼_r B, if there exist two identity matrices I_s, I_t, s, t ∈ N, such that
I_s ⊗ A = I_t ⊗ B.

Remark 14 It is easy to verify that LME ∼_ℓ (similarly, RME ∼_r) is an equivalence relation. That is, it is (i) reflexive (A ∼_ℓ A); (ii) symmetric (if A ∼_ℓ B, then B ∼_ℓ A); and (iii) transitive (if A ∼_ℓ B and B ∼_ℓ C, then A ∼_ℓ C).

Definition 15 Given A ∈ M.

(1) The left equivalence class of A is denoted by
⟨A⟩_ℓ := {B | B ∼_ℓ A};
(2) The right equivalence class of A is denoted by
⟨A⟩_r := {B | B ∼_r A}.
(3) A is left (right) reducible if there exist an I_s, s ≥ 2, and a matrix B such that A = B ⊗ I_s (correspondingly, A = I_s ⊗ B). Otherwise, A is left (right) irreducible.

Lemma 16 Assume A ∈ M_{β×β} and B ∈ M_{α×α}, where α, β ∈ N, α and β are co-prime, and

A ⊗ I_α = B ⊗ I_β.   (17)

Then there exists a λ ∈ F such that

A = λ I_β,  B = λ I_α.   (18)

Proof. Split A ⊗ I_α into equal-size blocks as

A ⊗ I_α = [A^{11}, ..., A^{1α}; ...; A^{α1}, ..., A^{αα}],

where A^{ij} ∈ M_{β×β}, i, j = 1, ..., α. Then (17) yields

A^{i,j} = b_{i,j} I_β.   (19)

Note that α and β are co-prime. Comparing the entries of both sides of (19), it is clear that (i) the diagonal elements of all A^{ii} are the same; (ii) all other elements (A^{ij}, j ≠ i) are zero. Hence A = b_{11} I_β. Similarly, we have B = a_{11} I_α. But (17) requires a_{11} = b_{11}, which is the required λ. The conclusion follows. □

Theorem 17 (1) If A ∼_ℓ B, then there exists a matrix Λ such that

A = Λ ⊗ I_β,  B = Λ ⊗ I_α.   (20)

(2) In each class ⟨A⟩_ℓ there exists a unique A_1 ∈ ⟨A⟩_ℓ such that A_1 is left irreducible.

Proof.

(1) Assume A ∼_ℓ B, that is, there exist I_α and I_β such that

A ⊗ I_α = B ⊗ I_β.   (21)

Without loss of generality, we assume α and β are co-prime. Otherwise, let their greatest common divisor be r = α ∧ β; then α and β in (21) can be replaced by α/r and β/r respectively.

Assume A ∈ M_{m×n} and B ∈ M_{p×q}. Then mα = pβ and nα = qβ. Since α and β are co-prime, we have

m = sβ, n = tβ, p = sα, q = tα.

Split A and B into block forms as

A = [A_{11}, ..., A_{1t}; ...; A_{s1}, ..., A_{st}],  B = [B_{11}, ..., B_{1t}; ...; B_{s1}, ..., B_{st}],

where A_{i,j} ∈ M_{β×β} and B_{i,j} ∈ M_{α×α}, i = 1, ..., s, j = 1, ..., t. Now (21) is equivalent to

A_{ij} ⊗ I_α = B_{ij} ⊗ I_β, ∀ i, j.   (22)

According to Lemma 16, we have A_{ij} = λ_{ij} I_β and
B_{ij} = λ_{ij} I_α. Define

Λ := [λ_{11}, ..., λ_{1t}; ...; λ_{s1}, ..., λ_{st}];

then equation (20) follows.

(2) For each A ∈ ⟨A⟩_ℓ we can find an irreducible A_0 such that A = A_0 ⊗ I_s. To prove it is unique, let B_0 ∈ ⟨A⟩_ℓ be irreducible with B = B_0 ⊗ I_t. We claim that A_0 = B_0. Since A_0 ∼_ℓ B_0, there exists Γ such that A_0 = Γ ⊗ I_p and B_0 = Γ ⊗ I_q. Since both A_0 and B_0 are irreducible, we have p = q = 1, which proves the claim. □

Fig. 1. Θ = lcm(A, B) and Λ = gcd(A, B)

Remark 18 Theorem 17 is also true for ∼_r with obvious modifications.

Remark 19 For ease of statement, we propose the following terminology:

(1) If A = B ⊗ I_s, then B is called a divisor of A and A is called a multiple of B.
(2) If (21) holds and α, β are co-prime, then the Λ satisfying (20) is called the greatest common divisor of A and B. Moreover, Λ = gcd(A, B) is unique.
(3) If (21) holds and α, β are co-prime, then
Θ := A ⊗ I_α = B ⊗ I_β   (23)
is called the least common multiple of A and B. Moreover, Θ = lcm(A, B) is unique. (Refer to Fig. 1.)
(4) Consider an equivalence class ⟨A⟩ and denote its unique irreducible element by A_1, which is called the root element. All the elements of ⟨A⟩ can be expressed as
A_i = A_1 ⊗ I_i, i = 1, 2, ....   (24)
A_i is called the i-th element of ⟨A⟩. Hence, an equivalence class ⟨A⟩ is a well-ordered sequence:
⟨A⟩ = {A_1, A_2, A_3, ...}.

Next, we modify some classical matrix functions to make them available for the equivalence classes.

Definition 20 (1) Let A ∈ M_{n×n}. Then a modified determinant is defined as

Dt(A) := |det(A)|^{1/n}.   (25)

(2) Consider an equivalence class of square matrices ⟨A⟩; the "determinant" of ⟨A⟩ is defined as

Dt(⟨A⟩) := Dt(A), A ∈ ⟨A⟩.   (26)

Proposition 21 (26) is well defined, i.e., it is independent of the choice of the representative A.

Proof. To see that (26) is well defined, we need to check that A ∼ B implies Dt(A) = Dt(B). Assume A = Λ ⊗ I_β, B = Λ ⊗ I_α, and Λ ∈ M_{k×k}; then

Dt(A) = |det(Λ ⊗ I_β)|^{1/kβ} = |det(Λ)|^{1/k},
Dt(B) = |det(Λ ⊗ I_α)|^{1/kα} = |det(Λ)|^{1/k}.

It follows that (26) is well defined. □

Remark 22 (1) Intuitively, Dt(⟨A⟩) defines only the "absolute value" of ⟨A⟩, because if there exists an A ∈ ⟨A⟩ such that det(A) < 0, then det(A ⊗ I_2) > 0. So it is not possible to define det(⟨A⟩) uniquely over the class.
(2) When det(A) = s for all A ∈ ⟨A⟩, we also use det(⟨A⟩) = s. But when F = R, only det(⟨A⟩) = 1 makes sense.

Definition 23 (1) Let A ∈ M_{n×n}. Then a modified trace is defined as

Tr(A) := (1/n) trace(A).   (27)
(2) Consider an equivalence class of square matrices ⟨A⟩; the "trace" of ⟨A⟩ is defined as

Tr(⟨A⟩) := Tr(A), A ∈ ⟨A⟩.   (28)

Similarly to Definition 20, we can easily prove that (28) is well defined. These two functions will be used in the sequel.

Next, we show that ⟨A⟩ = {A_1, A_2, ...} has a lattice structure.

Definition 24 ([2]) A poset L is a lattice if and only if for every pair a, b ∈ L both sup{a, b} and inf{a, b} exist.

Let A, B ∈ ⟨A⟩. If B is a divisor (multiple) of A, then B is said to be preceding (succeeding) A, denoted by B ≺ A (B ≻ A). Then ≺ is a partial order on ⟨A⟩.

Theorem 25 (⟨A⟩, ≺) is a lattice.

Proof. Assume A, B ∈ ⟨A⟩. It is enough to prove that the Λ = gcd(A, B) defined in (20) is inf(A, B), and the Θ = lcm(A, B) defined in (23) is sup(A, B).

To prove Λ = inf(A, B), we assume C ≺ A and C ≺ B; then we need only prove that C ≺ Λ. Since C ≺ A and C ≺ B, there exist I_p and I_q such that C ⊗ I_p = A and C ⊗ I_q = B. Now

C ⊗ I_p = A = Λ ⊗ I_β,
C ⊗ I_q = B = Λ ⊗ I_α.

Hence

C ⊗ I_p ⊗ I_q = Λ ⊗ I_β ⊗ I_q = Λ ⊗ I_α ⊗ I_p.

It follows that βq = αp. Since α and β are co-prime, we have p = mβ and q = nα, where m, n ∈ N. Then

C ⊗ I_p = C ⊗ I_m ⊗ I_β = Λ ⊗ I_β.

That is, C ⊗ I_m = Λ. Hence C ≺ Λ.

To prove Θ = sup(A, B), assume D ≻ A and D ≻ B. Then we can prove D ≻ Θ in a similar way. □

Definition 26 ⟨A⟩ is said to possess a property if every A ∈ ⟨A⟩ possesses this property. The property is then also said to be consistent with the equivalence relation.

In the following, some easily verifiable consistent properties are collected.

Proposition 27 (1) Assume A ∈ M is a square matrix. The following properties are consistent with the matrix equivalence (∼_ℓ or ∼_r):
• A is orthogonal, that is, A^{-1} = A^T;
• det(A) = 1;
• trace(A) = 0;
• A is upper (lower) triangular;
• A is strictly upper (lower) triangular;
• A is symmetric (skew-symmetric);
• A is diagonal.
(2) Assume A ∈ M_{2n×2n}, n = 1, 2, ..., and

J = [0, 1; −1, 0].   (29)

The following property is consistent with the matrix equivalence:

J ⋉ A + A^T ⋉ J = 0.   (30)

Remark 28 As long as a property is consistent with an equivalence, we can say whether the equivalence class has the property or not. For instance, because of Proposition 27 we can say ⟨A⟩ is orthogonal, det(⟨A⟩) = 1, etc.

2.3 Lattice Structure on M_µ

Denote by

M_µ := {A ∈ M_{m×n} | m/n = µ}.   (31)

Then it is clear that we have a partition

M = ∪_{µ ∈ Q+} M_µ,   (32)

where Q+ is the set of positive rational numbers.

Remark 29 To avoid possible confusion, we assume the fractions in Q+ are all reduced. Hence for each µ ∈ Q+ there are unique co-prime integers µ_y and µ_x such that

µ = µ_y / µ_x.
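The class machinery of Sections 2.2 is easy to spot-check numerically. The following sketch (our own illustration; `Dt` and `Tr` mirror (25) and (27)) builds a class from a root element Λ and verifies LME (Definition 13), the least common multiple Θ of Remark 19, and the class-invariance of the modified determinant and trace (Proposition 21 and its analogue for (28)).

```python
import numpy as np

rng = np.random.default_rng(1)
Lam = rng.standard_normal((2, 2))        # root element Λ of the class <Λ>

A = np.kron(Lam, np.eye(3))              # A = Λ ⊗ I_3, the 3rd element of <Λ>
B = np.kron(Lam, np.eye(2))              # B = Λ ⊗ I_2, the 2nd element

# LME (Definition 13): A ⊗ I_2 = B ⊗ I_3; this common value is Θ = lcm(A, B).
Theta = np.kron(A, np.eye(2))
lme_ok = np.allclose(Theta, np.kron(B, np.eye(3)))

def Dt(M):
    """Modified determinant (25): Dt(M) = |det M|^(1/n)."""
    return abs(np.linalg.det(M)) ** (1.0 / M.shape[0])

def Tr(M):
    """Modified trace (27): Tr(M) = trace(M)/n."""
    return np.trace(M) / M.shape[0]

# Both functions are constant on the class, so (26) and (28) are well defined.
dt_ok = np.allclose([Dt(A), Dt(B), Dt(Theta)], Dt(Lam))
tr_ok = np.allclose([Tr(A), Tr(B), Tr(Theta)], Tr(Lam))
```

The normalizing exponent 1/n and factor 1/n are exactly what absorb the Kronecker padding: det(Λ ⊗ I_β) = det(Λ)^β and trace(Λ ⊗ I_β) = β·trace(Λ).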
Definition 30 (1) Let µ ∈ Q+, with p and q co-prime and p/q = µ. Then we denote µ_y = p and µ_x = q as the y- and x-components of µ.
(2) Denote the spaces of various dimensions in M_µ as

M_µ^i := M_{iµ_y × iµ_x}, i = 1, 2, ....

Assume A_α ∈ M_µ^α, A_β ∈ M_µ^β, A_α ∼ A_β, and α | β; then A_α ⊗ I_k = A_β, where k = β/α. One easily sees that we can define an embedding mapping bd_k : M_µ^α → M_µ^β as

bd_k(A) := A ⊗ I_k.   (33)

In this way, M_µ^α can be considered as a subspace of M_µ^β. The order determined by this space–subspace relation is denoted as

M_µ^α ⊏ M_µ^β.   (34)

If (34) holds, M_µ^α is called a divisor of M_µ^β, and M_µ^β is called a multiple of M_µ^α.

Denote i ∧ j = gcd(i, j) and i ∨ j = lcm(i, j). Using the order of (34), M_µ has the following structure.

Theorem 31 (1) Given M_µ^i and M_µ^j, their greatest common divisor is M_µ^{i∧j}, and their least common multiple is M_µ^{i∨j}. (Please refer to Fig. 2.)
(2) Assume A ∼ B, A ∈ M_µ^i and B ∈ M_µ^j. Then their greatest common divisor Λ = gcd(A, B) ∈ M_µ^{i∧j}, and their least common multiple Θ = lcm(A, B) ∈ M_µ^{i∨j}.

Fig. 2. Lattice structure of M_µ

Next, we define

M_µ^i ∧ M_µ^j := M_µ^{i∧j},  M_µ^i ∨ M_µ^j := M_µ^{i∨j}.   (35)

From the above discussion, the following result is obvious.

Proposition 32 Consider M_µ. The following are equivalent:

(1) M_µ^α is a subspace of M_µ^β;
(2) α is a factor of β, i.e., α | β;
(3) M_µ^α ∧ M_µ^β = M_µ^α;
(4) M_µ^α ∨ M_µ^β = M_µ^β.

Using the order ⊏ defined by (34), it is clear that all the fixed-dimension vector spaces M_µ^i, i = 1, 2, ..., form a lattice.

Proposition 33 (M_µ, ⊏) is a lattice with

sup(M_µ^α, M_µ^β) = M_µ^{α∨β},  inf(M_µ^α, M_µ^β) = M_µ^{α∧β}.   (36)

The following properties are easily verifiable.

Proposition 34 Consider the lattice (M_µ, ⊏).

(1) It has a smallest (root) subspace M_µ^1 = M_{p×q}, where p, q are co-prime and p/q = µ. That is,

M_µ^1 ∧ M_µ^i = M_µ^1,  M_µ^1 ∨ M_µ^i = M_µ^i.

But there is no largest element.
(2) The lattice is distributive, i.e.,

M_µ^i ∧ (M_µ^j ∨ M_µ^k) = (M_µ^i ∧ M_µ^j) ∨ (M_µ^i ∧ M_µ^k),
M_µ^i ∨ (M_µ^j ∧ M_µ^k) = (M_µ^i ∨ M_µ^j) ∧ (M_µ^i ∨ M_µ^k).

(3) For any finite set of spaces M_µ^{i_s}, s = 1, 2, ..., r, there exists a smallest super-space M_µ^u, u = ∨_{s=1}^r i_s, such that
• M_µ^{i_s} ⊏ M_µ^u, s = 1, 2, ..., r;
Definition 35 ([16]) Let (L, ≺) and (M, ⊏) be two lattices.

(1) A mapping ϕ : L → M is called an order-preserving mapping if ℓ_1 ≺ ℓ_2 implies ϕ(ℓ_1) ⊏ ϕ(ℓ_2).

(2) A mapping ϕ : L → M is called a homomorphism, and (L, ≺) and (M, ⊏) are said to be lattice homomorphic, denoted by (L, ≺) ≈ (M, ⊏), if ϕ satisfies the following conditions:

ϕ(sup(ℓ_1, ℓ_2)) = sup(ϕ(ℓ_1), ϕ(ℓ_2));    (37)

and

ϕ(inf(ℓ_1, ℓ_2)) = inf(ϕ(ℓ_1), ϕ(ℓ_2)).    (38)

(3) A homomorphism ϕ : L → M is called an isomorphism, and (L, ≺) and (M, ⊏) are said to be lattice isomorphic, denoted by (L, ≺) ≅ (M, ⊏), if ϕ is one-to-one and onto.

Assume A ∈ M_µ is irreducible, say A ∈ M_µ^i, and define ϕ : ⟨A⟩ → M_µ as

ϕ(A_j) := M_µ^{ij},  j = 1, 2, ⋯;    (39)

then it is easy to verify the following result.

Proposition 36 The mapping ϕ : ⟨A⟩ → M_µ defined in (39) is a lattice homomorphism from (⟨A⟩, ≺) to (M_µ, ⊏).

Next, we consider the M_µ for different µ's. It is also easy to verify the following result.

Proposition 37 Define a mapping ϕ : M_µ → M_λ as

ϕ(M_µ^i) := M_λ^i.

The mapping ϕ : (M_µ, ⊏) → (M_λ, ⊏) is a lattice isomorphism.

Example 38 According to Proposition 37, if we still assume A ∈ M_µ and replace µ in (39) by any α ∈ Q_+, that is, define

ϕ(A_j) := M_α^{ij},  j = 1, 2, ⋯,

then it is easy to see that ϕ : (⟨A⟩, ≺) → (M_α, ⊏) is still a lattice homomorphism.

Definition 39 Let (L, ≺) be a lattice and S ⊂ L. If (S, ≺) is also a lattice, it is called a sub-lattice of (L, ≺).

Remark 40 Let ϕ : (H, ≺) → (M, ⊏) be an injective (i.e., one-to-one) lattice homomorphism. Then ϕ : H → ϕ(H) is a lattice isomorphism. Hence ϕ(H) is a sub-lattice of (M, ⊏). If we identify H with ϕ(H), we can simply say that H is a sub-lattice of M.

Definition 41 Let (L, ≺) and (M, ⊏) be two lattices. The product order ⊂ := ≺ × ⊏ defined on the product set

L × M := {(ℓ, m) | ℓ ∈ L, m ∈ M}

is: (ℓ_1, m_1) ⊂ (ℓ_2, m_2) if and only if ℓ_1 ≺ ℓ_2 and m_1 ⊏ m_2.

Theorem 42 Let (L, ≺) and (M, ⊏) be two lattices. Then (L × M, ⊂) is also a lattice, called the product lattice of (L, ≺) and (M, ⊏).

Proof. Let (ℓ_1, m_1) and (ℓ_2, m_2) be two elements in L × M. Denote ℓ_s = sup(ℓ_1, ℓ_2) and m_s = sup(m_1, m_2). Then (ℓ_s, m_s) ⊃ (ℓ_j, m_j), j = 1, 2. To see that (ℓ_s, m_s) = sup((ℓ_1, m_1), (ℓ_2, m_2)), let (ℓ, m) ⊃ (ℓ_j, m_j), j = 1, 2. Then ℓ ≻ ℓ_j and m ⊐ m_j, j = 1, 2. It follows that ℓ ≻ ℓ_s and m ⊐ m_s. That is, (ℓ, m) ⊃ (ℓ_s, m_s). We conclude that (ℓ_s, m_s) = sup((ℓ_1, m_1), (ℓ_2, m_2)).

Similarly, setting ℓ_i = inf(ℓ_1, ℓ_2) and m_i = inf(m_1, m_2), we can prove that (ℓ_i, m_i) = inf((ℓ_1, m_1), (ℓ_2, m_2)). ✷
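The lattice operations in the proof of Theorem 42 are computed componentwise. A minimal Python sketch, assuming the divisibility order on positive integers (the order underlying (M_µ, ⊏), where M_µ^i ⊏ M_µ^j means i | j), with sup = lcm and inf = gcd taken componentwise in the product:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Product lattice of two divisibility lattices (Theorem 42):
# (a1, b1) <= (a2, b2) iff a1 | a2 and b1 | b2.
def leq(x, y):
    return y[0] % x[0] == 0 and y[1] % x[1] == 0

def sup(x, y):  # componentwise sup = (lcm, lcm)
    return (lcm(x[0], y[0]), lcm(x[1], y[1]))

def inf(x, y):  # componentwise inf = (gcd, gcd)
    return (gcd(x[0], y[0]), gcd(x[1], y[1]))

x, y = (2, 3), (3, 2)
s, i = sup(x, y), inf(x, y)
assert leq(x, s) and leq(y, s)   # s is an upper bound
assert leq(i, x) and leq(i, y)   # i is a lower bound
print(s, i)                      # (6, 6) (1, 1)
```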
Finally, we give an example to show that an order-preserving mapping may not be a lattice homomorphism.

Example 43 Consider the product of the two lattices (M_µ, ⊏) and (M_λ, ⊏). Define a mapping ϕ : (M_µ, ⊏) × (M_λ, ⊏) → (M_µλ, ⊏) as

ϕ(M_µ^p × M_λ^q) := M_µλ^{pq}.

Assume M_µ^i ⊏ M_µ^j and M_λ^s ⊏ M_λ^t; then i | j and s | t, and by the definition of the product lattice, we have

M_µ^i × M_λ^s ⊂ M_µ^j × M_λ^t.

Since is | jt, we have

ϕ(M_µ^i × M_λ^s) = M_µλ^{is} ⊏ M_µλ^{jt} = ϕ(M_µ^j × M_λ^t).    (40)

That is, ϕ is an order-preserving mapping.

Consider two elements of the product lattice, α = M_µ^p × M_λ^s and β = M_µ^q × M_λ^t. Following the same arguments as in the proof of Theorem 42, one sees easily that

lcm(α, β) = M_µ^{p∨q} × M_λ^{s∨t},
gcd(α, β) = M_µ^{p∧q} × M_λ^{s∧t}.

Then

ϕ(lcm(α, β)) = M_µλ^{(p∨q)(s∨t)},
ϕ(gcd(α, β)) = M_µλ^{(p∧q)(s∧t)}.

On the other hand, consider

ϕ(α) = M_µλ^{ps},  ϕ(β) = M_µλ^{qt}.

Now

lcm(ϕ(α), ϕ(β)) = M_µλ^{(ps)∨(qt)},
gcd(ϕ(α), ϕ(β)) = M_µλ^{(ps)∧(qt)}.

It is obvious that in general

(p∨q)(s∨t) ≠ (ps)∨(qt),

as well as

(p∧q)(s∧t) ≠ (ps)∧(qt).

Hence, ϕ is not a homomorphism.

2.4 Monoid and Quotient Monoid

A monoid is a semigroup with identity. We refer readers to [28], [25], [19] for the concepts and some basic properties.

Recall that

M := ∪_{m∈N} ∪_{n∈N} M_{m×n}.

We have the following algebraic structure.

Proposition 44 The algebraic system (M, ⋉) is a monoid.

Proof. The associativity comes from the properties of ⋉ (refer to (5)). The identity element is 1. ✷

One sees easily that this monoid covers numbers, vectors, and matrices of arbitrary dimensions.

In the following, some of its useful sub-monoids are presented:

• M(k):

M(k) := ∪_{α∈N} ∪_{β∈N} M_{k^α × k^β},

where k ∈ N and k > 1. It is obvious that M(k) < M. (In this section A < B means A is a sub-monoid of B.) This sub-monoid is useful for calculating the product of tensors over a k-dimensional vector space [1]. It is particularly useful for k-valued logical dynamic systems [6], [7]. When k = 2, it is used for Boolean dynamic systems.

In this sub-monoid the STP can be defined as follows:

Definition 45 (1) Let X ∈ F^n be a column vector and Y ∈ F^m a row vector.

• Assume n = pm (denoted by X ≻_p Y): Split X into m equal blocks as

X = [X_1^T, X_2^T, ⋯, X_m^T]^T,

where X_i ∈ F^p, ∀i. Define

X ⋉ Y := Σ_{s=1}^m X_s y_s ∈ F^p.

• Assume np = m (denoted by X ≺_p Y): Split Y into n equal blocks as

Y = [Y_1, Y_2, ⋯, Y_n],

where Y_i ∈ F^p, ∀i. Define

X ⋉ Y := Σ_{s=1}^n x_s Y_s ∈ F^p.

(2) Assume A ∈ M_{m×n} and B ∈ M_{p×q}, where n = tp (denoted by A ≻_t B) or nt = p (denoted by A ≺_t B). Then

A ⋉ B := C = (c_{i,j}),

where

c_{i,j} = Row_i(A) ⋉ Col_j(B).
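The monoid structure of Proposition 44 can be tried out numerically. A minimal Python sketch (numpy assumed) of the left STP in the Kronecker-padding form A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = n ∨ p, of Definition 1, checking associativity and the identity element:

```python
import numpy as np
from math import gcd

def stp(A, B):
    """Left semi-tensor product: A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = n * p // gcd(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

rng = np.random.default_rng(0)
A = rng.random((2, 3))    # A ∈ M_{2×3}
B = rng.random((2, 4))    # B ∈ M_{2×4}, t = lcm(3, 2) = 6
C = stp(A, B)
print(C.shape)            # (4, 12): (mt/n) × (qt/p) = (2·6/3) × (4·6/2)

# Associativity (Proposition 44): (A ⋉ B) ⋉ D = A ⋉ (B ⋉ D)
D = rng.random((3, 5))
assert np.allclose(stp(stp(A, B), D), stp(A, stp(B, D)))

# The scalar 1 (as a 1×1 matrix) is the identity element
assert np.allclose(stp(np.eye(1), A), A) and np.allclose(stp(A, np.eye(1)), A)
```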
Remark 46 (1) It is easy to prove that when A ≺_t B or B ≺_t A for some t ∈ N, this definition of the left STP coincides with Definition 1. Though this definition is not as general as Definition 1, it has a clear physical meaning. In particular, so far this definition covers almost all the applications.

(2) Unfortunately, this definition is not suitable for the right STP. This is a big difference between the left and right STPs.

• V:

V := ∪_{k∈N} M_{k×1}.

It is obvious that V < M. This sub-monoid consists of column vectors. In this sub-monoid the STP degenerates to the Kronecker product. We denote by V^T the sub-monoid of row vectors. It is also clear that V^T < M.

• L:

L := {A ∈ M | Col(A) ⊂ Δ_s, s ∈ N}.

It is obvious that L < M. This sub-monoid consists of all logical matrices. It is used to express the product of logical mappings.

• P:

P := {A ∈ M | Col(A) ⊂ Υ_s, for some s ∈ N}.

It is obvious that P < M. This monoid is useful for probabilistic logical mappings.

• L(k):

L(k) := L ∩ M(k).

It is obvious that L(k) < L < M. We use it for k-valued logical mappings.

Next, we define the set of "short" matrices as

Ξ := {A ∈ M_{m×n} | m ≤ n},

and its subset

Ξ_r := {A ∈ M | A is of full row rank}.

Then we have the following result.

Proposition 47

Ξ_r < Ξ < M.    (41)

Proof. Assume A ∈ M_{m×n}, B ∈ M_{p×q}, and A, B ∈ Ξ; then m ≤ n and p ≤ q. Let t = n ∨ p. Then A ⋉ B ∈ M_{(mt/n)×(tq/p)}. It is easy to see that mt/n ≤ tq/p, so A ⋉ B ∈ Ξ. The second part is proved.

As for the first part, assume A, B ∈ Ξ_r. Then

rank(A ⋉ B) = rank[(A ⊗ I_{t/n})(B ⊗ I_{t/p})]
≥ rank[(A ⊗ I_{t/n})(B ⊗ I_{t/p})(B^T(BB^T)^{-1} ⊗ I_{t/p})]
= rank[(A ⊗ I_{t/n})(I_p ⊗ I_{t/p})]
= rank(A ⊗ I_{t/n}) = mt/n.

Hence A ⋉ B ∈ Ξ_r. ✷

Similarly, we can define the set of "tall" matrices Π and the set of matrices with full column rank Π_c. We can also prove that

Π_c < Π < M.    (42)

Next, we consider the quotient space

Σ_M := M / ∼.

Definition 48 ([41]) (1) A nonempty set S with a binary operation σ : S × S → S is called an algebraic system.

(2) Assume ∼ is an equivalence relation on an algebraic system (S, σ). The equivalence relation is a congruence relation if for any A, B, C, D ∈ S with A ∼ C and B ∼ D, we have

A σ B ∼ C σ D.    (43)

Proposition 49 Consider the algebraic system (M, ⋉) with the equivalence relation ∼ = ∼_ℓ. The equivalence relation is a congruence.

Proof. Let A ∼ Ã and B ∼ B̃. According to Theorem 17, there exist U ∈ M_{m×n} and V ∈ M_{p×q} such that

A = U ⊗ I_s,  Ã = U ⊗ I_t;  B = V ⊗ I_α,  B̃ = V ⊗ I_β.

Denote

n ∨ p = r,  ns ∨ αp = rξ,  nt ∨ βp = rη.

Then

A ⋉ B = (U ⊗ I_s ⊗ I_{rξ/ns})(V ⊗ I_α ⊗ I_{rξ/αp}) = (U ⊗ I_{r/n})(V ⊗ I_{r/p}) ⊗ I_ξ.

Similarly, we have

Ã ⋉ B̃ = (U ⊗ I_{r/n})(V ⊗ I_{r/p}) ⊗ I_η.

Hence A ⋉ B ∼ Ã ⋉ B̃. ✷
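The congruence of Proposition 49 can be spot-checked numerically. A minimal Python sketch (numpy assumed; `stp` is the Kronecker-padding form of the left STP), with dimensions chosen so that Ã ⋉ B̃ works out to exactly (A ⋉ B) ⊗ I_2:

```python
import numpy as np
from math import gcd

def stp(A, B):
    """Left semi-tensor product A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = n * p // gcd(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

rng = np.random.default_rng(1)
A = rng.random((2, 3))
B = rng.random((5, 4))
At = np.kron(A, np.eye(2))   # Ã = A ⊗ I_2 ∼ℓ A
Bt = np.kron(B, np.eye(3))   # B̃ = B ⊗ I_3 ∼ℓ B

P, Pt = stp(A, B), stp(At, Bt)
# A ⋉ B and Ã ⋉ B̃ are left-equivalent: here Ã ⋉ B̃ = (A ⋉ B) ⊗ I_2
k = Pt.shape[0] // P.shape[0]
assert np.allclose(Pt, np.kron(P, np.eye(k)))
print(k)   # 2
```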
According to Proposition 49, we know that ⋉ is well defined on the quotient space Σ_M via ⟨A⟩ ⋉ ⟨B⟩ := ⟨A ⋉ B⟩. Moreover, the following result is obvious:

Proposition 50 (1) (Σ_M, ⋉) is a monoid.

(2) Let S < M be a sub-monoid. Then S/∼ is a sub-monoid of Σ_M, that is,

S/∼ < Σ_M.

Since the S in Proposition 50 could be any sub-monoid of M, all the aforementioned sub-monoids have their corresponding quotient sub-monoids, which are sub-monoids of Σ_M. For instance, V/∼, L/∼, etc. are sub-monoids of Σ_M.

2.5 Group Structure on M^µ

Proposition 51 Assume A ∈ M_{µ1} and B ∈ M_{µ2}; then A ⋉ B ∈ M_{µ1µ2}. That is, the operation ⋉ is a mapping

⋉ : M_{µ1} × M_{µ2} → M_{µ1µ2}.

Proof. Assume A ∈ M_{m×n} and B ∈ M_{p×q}, where µ1 = m/n and µ2 = p/q, and let t = n ∨ p. Then

A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) ∈ M_{(mt/n)×(qt/p)} ⊂ M_{µ1µ2}. ✷

Definition 52 (1) Define

M^µ := ∪_{z∈Z} M_{µ^z}.

Then M^µ is closed under the operator ⋉.

(2) Set

Σ_{µ^z} := M_{µ^z} / ∼,

and define

Σ^µ := ∪_{z∈Z} Σ_{µ^z}.

Then Σ^µ is also closed under the operator ⋉.

(3) ⟨A⟩, ⟨B⟩ ∈ Σ^µ are said to be power equivalent, denoted by ⟨A⟩ ∼_p ⟨B⟩, if there exists an integer z ∈ Z such that both ⟨A⟩, ⟨B⟩ ∈ Σ_{µ^z}. Denote

⟨⟨A⟩⟩ := {⟨B⟩ | ⟨B⟩ ∼_p ⟨A⟩}.    (44)

Remark 53 It is obvious that ⋉ is consistent with ∼_p. Hence ⋉ is well defined on the set of equivalence classes as

⟨⟨A⟩⟩ ⋉ ⟨⟨B⟩⟩ := ⟨⟨A ⋉ B⟩⟩.    (45)

Then we have the following group structure.

Theorem 54 (Σ^µ/∼_p, ⋉) is a group, which is isomorphic to (Z, +). Precisely, assume A ∈ M_{µ^z}; then Ψ : Σ^µ/∼_p → Z defined as

Ψ(⟨⟨A⟩⟩) := z

is a group isomorphism.
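Theorem 54 can be illustrated numerically: under ⋉ the power exponents add, which is exactly the homomorphism property of Ψ. A minimal Python sketch with µ = 2, where the hypothetical helper `z_exponent` reads Ψ off the shape ratio m/n = 2^z (numpy assumed):

```python
import numpy as np
from math import gcd, log2

def stp(A, B):
    """Left semi-tensor product via Kronecker padding."""
    n, p = A.shape[1], B.shape[0]
    t = n * p // gcd(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def z_exponent(A, mu=2):
    """Ψ of Theorem 54 for µ = 2: A ∈ M_{2^z} ↦ z, read from the shape ratio m/n."""
    m, n = A.shape
    return round(log2(m / n))

A = np.ones((4, 1))          # m/n = 4 = 2^2, so Ψ(⟨⟨A⟩⟩) = 2
B = np.ones((1, 2))          # m/n = 1/2 = 2^{-1}, so Ψ(⟨⟨B⟩⟩) = -1
C = stp(A, B)
# Ψ is a homomorphism: Ψ(A ⋉ B) = Ψ(A) + Ψ(B)
assert z_exponent(C) == z_exponent(A) + z_exponent(B) == 1
print(C.shape)               # (4, 2)
```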
2.6 Semi-tensor Addition and Vector Space Structure of Σ_µ

Definition 55 Let A, B ∈ M_µ. Precisely, A ∈ M_{m×n}, B ∈ M_{p×q}, and m/n = p/q = µ. Set t = m ∨ p. Then

(1) the left semi-tensor addition (STA) of A and B, denoted by ∔, is defined as

A ∔ B := (A ⊗ I_{t/m}) + (B ⊗ I_{t/p}).    (46)

Correspondingly, the left semi-tensor subtraction (STS) is defined as

A ⊢ B := A ∔ (−B).    (47)
(2) The right STA of A and B, denoted by ∔̄, is defined as

A ∔̄ B := (I_{t/m} ⊗ A) + (I_{t/p} ⊗ B).    (48)

Correspondingly, the right STS is defined as

A ⊣ B := A ∔̄ (−B).    (49)
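A minimal Python sketch of Definition 55 (numpy assumed): both the left STA (46) and the right STA (48) pad the summands with an identity so that the shapes match, and the sum stays in the same M_µ; the two operations differ in general.

```python
import numpy as np
from math import gcd

def sta(A, B):
    """Left semi-tensor addition (46): A ∔ B = A ⊗ I_{t/m} + B ⊗ I_{t/p}, t = lcm(m, p)."""
    m, p = A.shape[0], B.shape[0]
    t = m * p // gcd(m, p)
    return np.kron(A, np.eye(t // m)) + np.kron(B, np.eye(t // p))

def sta_r(A, B):
    """Right semi-tensor addition (48): the identities multiply from the left."""
    m, p = A.shape[0], B.shape[0]
    t = m * p // gcd(m, p)
    return np.kron(np.eye(t // m), A) + np.kron(np.eye(t // p), B)

# A ∈ M_{2×4}, B ∈ M_{3×6}: both in M_µ with µ = 1/2, and t = 2 ∨ 3 = 6
A = np.arange(8.0).reshape(2, 4)
B = np.arange(18.0).reshape(3, 6)
S = sta(A, B)
print(S.shape)                           # (6, 12): the sum stays in M_{1/2}
assert S.shape[0] / S.shape[1] == 0.5
assert not np.allclose(S, sta_r(A, B))   # left and right STA differ in general
```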
Remark 56 Let σ ∈ {∔, ⊢, ∔̄, ⊣} be one of the four binary operators. Then it is easy to verify that

(1) if A, B ∈ M_µ, then A σ B ∈ M_µ;

(2) if A and B are as in Definition 55, then A σ B ∈ M_{t×(nt/m)};

(3) setting s = n ∨ q, we have s/n = t/m and s/q = t/p, so σ can also be defined by using the column numbers, e.g.,

A ∔ B := (A ⊗ I_{s/n}) + (B ⊗ I_{s/q}),

etc.

Theorem 57 Consider the algebraic system (M_µ, σ), where σ ∈ {∔, ⊢} and ∼ = ∼_ℓ (or σ ∈ {∔̄, ⊣} and ∼ = ∼_r). Then the equivalence relation ∼ is a congruence relation with respect to σ.

Proof. We prove one case, where σ = ∔ and ∼ = ∼_ℓ. The proofs for the other cases are similar.

Assume A ∼_ℓ Ã and B ∼_ℓ B̃. Set P = gcd(A, Ã) and Q = gcd(B, B̃); then

A = P ⊗ I_s,  Ã = P ⊗ I_s̃;  B = Q ⊗ I_α,  B̃ = Q ⊗ I_α̃,    (50)

where P ∈ M_{m0×n0} and Q ∈ M_{p0×q0}. Denote

m0 ∨ p0 = r,  m0 s ∨ p0 α = rξ,  m0 s̃ ∨ p0 α̃ = rη.    (51)

Then

A ∔ B = [(P ⊗ I_{r/m0}) + (Q ⊗ I_{r/p0})] ⊗ I_ξ,    (52)

and similarly

Ã ∔ B̃ = [(P ⊗ I_{r/m0}) + (Q ⊗ I_{r/p0})] ⊗ I_η.    (53)

(52) and (53) imply that Ã ∔ B̃ ∼ A ∔ B. ✷

Define the left and right quotient spaces Σ_µ^ℓ and Σ_µ^r respectively as

Σ_µ^ℓ := M_µ / ∼_ℓ;    (54)

Σ_µ^r := M_µ / ∼_r.    (55)

According to Theorem 57, the operation ∔ (or ⊢) can be extended to Σ_µ^ℓ as

⟨A⟩_ℓ ∔ ⟨B⟩_ℓ := ⟨A ∔ B⟩_ℓ,
⟨A⟩_ℓ ⊢ ⟨B⟩_ℓ := ⟨A ⊢ B⟩_ℓ,  ⟨A⟩_ℓ, ⟨B⟩_ℓ ∈ Σ_µ^ℓ.    (56)

Similarly, we can define ∔̄ (or ⊣) on the quotient space Σ_µ^r as

⟨A⟩_r ∔̄ ⟨B⟩_r := ⟨A ∔̄ B⟩_r,
⟨A⟩_r ⊣ ⟨B⟩_r := ⟨A ⊣ B⟩_r,  ⟨A⟩_r, ⟨B⟩_r ∈ Σ_µ^r.    (57)

The following result is important, and the verification is straightforward.

Theorem 58 Using the definitions in (56) (correspondingly, (57)), the quotient space (Σ_µ^ℓ, ∔) (correspondingly, (Σ_µ^r, ∔̄)) is a vector space.
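The congruence of Theorem 57, which makes the quotient addition (56) well defined, can be spot-checked numerically. A minimal sketch (numpy assumed), testing left equivalence of the two sums by padding each with ⊗ I to a common row size:

```python
import numpy as np
from math import gcd

def sta(A, B):
    """Left semi-tensor addition (46): A ∔ B = A ⊗ I_{t/m} + B ⊗ I_{t/p}, t = lcm(m, p)."""
    m, p = A.shape[0], B.shape[0]
    t = m * p // gcd(m, p)
    return np.kron(A, np.eye(t // m)) + np.kron(B, np.eye(t // p))

rng = np.random.default_rng(2)
A = rng.random((2, 4))           # A ∈ M_{1/2}
B = rng.random((3, 6))           # B ∈ M_{1/2}
At = np.kron(A, np.eye(3))       # Ã ∼ℓ A
Bt = np.kron(B, np.eye(5))       # B̃ ∼ℓ B

S, St = sta(A, B), sta(At, Bt)
# Theorem 57: Ã ∔ B̃ ∼ℓ A ∔ B, i.e. both represent the same class in Σ_µ^ℓ;
# check by padding each sum with ⊗ I to a common size.
r = S.shape[0] * St.shape[0] // gcd(S.shape[0], St.shape[0])
assert np.allclose(np.kron(S, np.eye(r // S.shape[0])),
                   np.kron(St, np.eye(r // St.shape[0])))
```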