arXiv:1605.09523v4 [math.GR] 19 Sep 2017

On Equivalence of Matrices

Daizhan Cheng ⋆

Key Laboratory of Systems and Control, AMSS, Chinese Academy of Sciences, Beijing 100190, P.R.China

Abstract

A new matrix product, called the semi-tensor product (STP), is briefly reviewed. The STP extends the classical matrix product to two arbitrary matrices. Under STP the set of matrices becomes a monoid (semi-group with identity). Some related structures and properties are investigated. Then the generalized matrix addition is also introduced, which extends the classical matrix addition to a class of two matrices with different dimensions. Motivated by the STP of matrices, two kinds of equivalences of matrices (including vectors) are introduced, which are called the matrix equivalence (M-equivalence) and the vector equivalence (V-equivalence) respectively. The lattice structure has been established for each equivalence. Under each equivalence, the corresponding quotient space becomes a vector space. Under the vector space structure, many algebraic, geometric, and analytic structures have been posed to the quotient space, which include (i) lattice structure; (ii) inner product and norm (distance); (iii) topology; (iv) a fiber bundle structure, called the discrete bundle; (v) bundled differential manifold; (vi) bundled Lie group and Lie algebra. Under V-equivalence, vectors of different dimensions form a vector space V, and a matrix A of arbitrary dimension is considered as an operator (linear mapping) on V. When A is a bounded operator (not necessarily square, but including square matrices as a special case), the generalized characteristic function, eigenvalue, eigenvector, etc. are defined. In one word, this new matrix theory overcomes the dimension barrier in a certain sense. It provides much more freedom for using the matrix approach to practical problems.

Key words: Semi-tensor product/addition (STP/STA), vector product/addition (VP/VA), matrix/vector equivalence (M-/V-), lattice, topology, fiber bundle, bundled manifold/Lie algebra/Lie group (BM/BLA/BLG).

⋆ This work is supported partly by the National Natural Science Foundation of China under Grants 61773371 and 61733018. Corresponding author: Daizhan Cheng. Tel.: +86 10 8254 1232.

1 Preliminaries

1.1 Contents

For convenience in reading, a list of contents is given as follows.

I. Preliminaries
1.1 Contents
1.2 Introduction
1.3 Symbols

II. M-equivalence and Lattice Structure
2.1 STP of Matrices
2.2 M-Equivalence of Matrices
2.3 Lattice Structure on M_µ
2.4 Monoid and Quotient Monoid
2.5 Group Structure of Σ_µ
2.6 Semi-tensor Addition and Vector Space Structure

III. Topology on M-equivalence Space
3.1 Topology via Sub-base
3.2 Fiber Bundle Structure on M_µ
3.3 Coordinate Frame on Σ_µ
3.4 Inner Product on Σ_µ
3.5 Σ_µ as a Metric Space
3.6 Sub-space of Σ_µ

IV. Differential Structure on M-equivalence Space
4.1 Bundled Manifold
4.2 C^r Functions on Σ_µ
4.3 Generalized Inner Product
4.4 Vector Fields
4.5 Integral Curves
4.6 Forms
4.7 Tensor Fields

V. Lie Algebra Structure on Square M-equivalence Space
5.1 Lie Algebra on Square M-equivalence Space
5.2 Bundled Lie Algebra
5.3 Bundled Lie Sub-algebra
5.4 Further Properties of gl(F)

VI. Lie Group on Nonsingular M-equivalence Space
6.1 Bundled Lie Group
6.2 Relationship with gl(F)
6.3 Lie Subgroup of GL(F)
6.4 Symmetric Group

VII. V-equivalence
7.1 Equivalence of Vectors of Different Dimensions
7.2 Structure on V-equivalence Space
7.3 Inner Product and Linear Mappings
7.4 Type-1 Invariant Subspace
7.5 Type-2 Invariant Subspace
7.6 Higher Order Linear Mapping
7.7 Invariant Subspace on V-equivalence Space
7.8 Generalized Linear System

VIII. Concluding Remarks

1.2 Introduction

Matrix theory and calculus are two classical and fundamental mathematical tools in modern science and technology. There are two mostly used operators on the set of matrices: the conventional matrix product and the matrix addition. Roughly speaking, the object of matrix theory is (M, +, ×), where M is the set of all matrices. Unfortunately, unlike the arithmetic system (R, +, ×), in matrix theory both "+" and "×" are restricted by the matrix dimensions. Precisely speaking, consider two matrices A ∈ M_{m×n} and B ∈ M_{p×q}. Then the "product" A × B is well posed if and only if n = p; the "addition" A + B is defined if and only if m = p and n = q. Though there are some other matrix products, such as the Kronecker product, the Hadamard product, etc., they are of different stories [24].

The main purpose of this paper is to develop a new matrix theory, which intends to overcome the dimension barrier by extending the matrix product and the matrix addition to two matrices which do not meet the classical dimension requirement. As generalizations of the classical ones, they should be consistent with the classical definitions. That is, when the dimension requirements of two argument matrices in classical matrix theory are satisfied, the newly defined operators should coincide with the original ones.

Because of the extension of the two fundamental operators, many related concepts can be extended. For instance, the characteristic functions, the eigenvalues and eigenvectors of a square matrix can be extended to certain non-square matrices; the Lie algebraic structure can also be extended to dimension-varying square matrices. All these extensions should be consistent with the classical ones. In one word, we are developing the classical matrix theory but not violating any of the original matrix theory.

When we were working on generalizing the fundamental matrix operators we met a serious problem: though the extended operators are applicable to certain sets of matrices with different dimensions, these sets fail to be vector spaces anymore. This drawback is not acceptable, because it blocks the way to extend many nice algebraic or geometric structures in matrix theory, such as the Lie algebraic structure, the manifold structure, etc., to the enlarged set, which includes matrices of different dimensions. To overcome this obstacle, we eventually introduced certain equivalence relations. Then the quotient spaces, called the equivalence spaces, become vector spaces. Two equivalence relations have been proposed. They are the matrix equivalence (M-equivalence) and the vector equivalence (V-equivalence).

Then many nice algebraic, analytic, and geometric structures have been developed on the M-equivalence spaces. They are briefly introduced as follows:

• Lattice structure: The elements in each equivalence class form a lattice. The classes of spaces with different dimensions also form a lattice. The former and the latter are homomorphic. The lattices obtained for M-equivalence and V-equivalence are isomorphic.
• Topological structure: A topological structure is proposed for the M-equivalence space, which is built by using a topological sub-base. It is then proved that under this topology the equivalence space is a Hausdorff (T2) space and is second countable.
• Inner product structure: An inner product is proposed on the M-equivalence space. The norm (distance) is also obtained. It is proved that the topology induced by this norm is the same as the topology produced by using the topological sub-base.
• Fiber bundle structure: A fiber bundle structure is proposed for the set of matrices (as total space) and the equivalence classes (as base space). The bundle structure is named the discrete bundle, because each fiber has the discrete topology.
• Bundled manifold structure: A (dimension-varying) manifold structure is proposed for the M-equivalence space. Its coordinate charts are constructed via the discrete bundle. Hence it is called a bundled manifold.
• Bundled Lie algebraic structure: A Lie algebra structure is proposed for the M-equivalence space. The Lie algebra is of infinite dimension, but almost all the properties of finite dimensional Lie algebras remain available.
• Bundled Lie group structure: For the M-equivalence classes of square nonsingular matrices a group structure is proposed. It also has the dimension-varying manifold structure. Both the algebraic and the geometric structures are consistent, and hence it becomes a Lie group. The relationship of this Lie group with the bundled Lie algebra is also investigated.

Under V-equivalence, all the vectors of varying dimensions form a vector space V, and any matrix A can be considered as a linear operator on V. A very important class of A, called the bounded operators, is investigated in detail. For a bounded operator A, which could be non-square, its characteristic function is proposed. Its eigenvalues and the corresponding eigenvectors are obtained. A generalized A-invariant subspace has been discussed in detail.

This work is motivated by the semi-tensor product (STP). The STP of matrices was proposed firstly and formally in 2001 [3]. Then it has been used to some Newton differential dynamic systems and their control problems [4], [51], [40]. A basic summarization was given in [5].

Since 2008, STP has been used to formulate and analyze Boolean networks as well as general logical dynamic systems, and to solve control design problems for those systems. It has then been developed rapidly. This is witnessed by hundreds of research papers. A multitude of applications of STP include (i) logical dynamic systems [6], [18], [32]; (ii) systems biology [54], [20]; (iii) graph theory and formation control [45], [55]; (iv) circuit design and failure detection [33], [34], [9]; (v) game theory [21], [10], [11]; (vi) finite automata and symbolic dynamics [50], [57], [23]; (vii) coding and cryptography [58], [56]; (viii) fuzzy control [8], [17]; (ix) some engineering applications [49], [36]; and many other topics [7], [38], [52], [59], [60], [37]; just to name a few.

As a generalization of the conventional matrix product, STP is applicable to two matrices of arbitrary dimensions. In addition, this generalization keeps all fundamental properties of the conventional matrix product available. Therefore, it becomes a very convenient tool for investigating many matrix-expression-related problems.

Recently, the journal IET Control Theory & Applications published a special issue "Recent Developments in Logical Networks and Its Applications". It provides many up-to-date results on STP and its applications. Particularly, we refer to the survey paper [39] in this special issue for a general introduction to STP with applications.

Up to this time, the main effort has been put on its applications. Now, when we start to explore the mathematical foundation of STP, we find that the most significant characteristic of STP is that it overcomes the dimension barrier. After serious thought, it can be seen that in fact STP is defined on equivalence classes. Following this train of thought, the matrix theory on equivalence spaces emerges. The outline of this new matrix theory is presented in this manuscript. The results in this paper are totally new except the concepts and basic properties of STP, which are presented in subsection 2.1.

1.3 Symbols

For ease of statement, we first give some notations:

(1) N: set of natural numbers (i.e., N = {1, 2, ···});
(2) Z: set of integers;
(3) Q: set of rational numbers (Q_+: set of positive rational numbers);
(4) R: field of real numbers;
(5) C: field of complex numbers;
(6) F: a certain field of characteristic 0 (particularly, one may take F = R or F = C);
(7) M_{m×n}: set of m × n dimensional matrices over the field F. When the field F is obvious or does not affect the discussion, the superscript F can be omitted, and as a default F = R can be assumed;
(8) Col(A) (Row(A)): the set of columns (rows) of A; Col_i(A) (Row_i(A)): the i-th column (row) of A;
(9) D_k := {1, 2, ···, k}; D := D_2;
(10) δ_n^i: the i-th column of the identity matrix I_n;
(11) Δ_n := {δ_n^i | i = 1, 2, ···, n};
(12) L ∈ M_{m×r} is a logical matrix if Col(L) ⊂ Δ_m. The set of m × r logical matrices is denoted by L_{m×r};
(13) Assume L ∈ L_{m×r}. Then L = [δ_m^{i_1}, δ_m^{i_2}, ···, δ_m^{i_r}], briefly denoted as L = δ_m[i_1, i_2, ···, i_r];
(14) Let A = (a_{i,j}) ∈ M_{m×n} with a_{i,j} ∈ {0, 1} for all i, j. Then A is called a Boolean matrix. The set of m × n dimensional Boolean matrices is denoted by B_{m×n};
(15) Set of probabilistic vectors: Υ_k := {(r_1, r_2, ···, r_k)^T | r_i ≥ 0, Σ_{i=1}^k r_i = 1};
(16) Set of probabilistic matrices: Υ_{m×n} := {M ∈ M_{m×n} | Col(M) ⊂ Υ_m};
(17) a | b: a divides b;
(18) m ∧ n = gcd(m, n): the greatest common divisor of m and n;
(19) m ∨ n = lcm(m, n): the least common multiple of m and n;
(20) ⋉ (⋊): the left (right) semi-tensor product of matrices;
(21) ⋉~ (⋊~): the left (right) vector product of matrices;
(22) ∼ (∼_ℓ, ∼_r): the M-equivalence ((left, right) matrix equivalence);
(23) ↔ (↔_ℓ, ↔_r): the V-equivalence ((left, right) vector equivalence);
(24) The set of all matrices: M = ∪_{i=1}^∞ ∪_{j=1}^∞ M_{i×j};
(25) M_{·×q} := {A ∈ M | the column number of A is q};
(26) The set of matrices with fixed row/column ratio: M_µ := {A ∈ M_{m×n} | m/n = µ};
(27) Lattice homomorphism: ≈;
(28) Lattice isomorphism: ≅;
(29) Vector order: ≤;
(30) Vector space order: ⊑;
(31) Matrix order: ≺;
(32) Matrix space order: ⊏;
(33) The overall matrix quotient space: Σ_M = M/∼;
(34) The µ-matrix quotient space: Σ_µ := M_µ/∼;
(35) The µ = 1 matrix quotient space: Σ := M_1/∼;
(36) The set of all vectors: V = ∪_{n=1}^∞ V_n (note that, in the real case, V_n ∼ R^n);
(37) The vector quotient space under V-equivalence: Ω_V := V/↔;
(38) The vector quotient subspace under V-equivalence: Ω_V^i := V_{[i,·]}/↔;
(39) The matrix quotient space under V-equivalence: Ω_M := M/↔;
(40) The matrix quotient subspace under V-equivalence: Ω_M^n := M_{·×n}/↔;

(41) Given a C^r manifold M (r could be ∞ or ω):
• its tangent space is T(M);
• its cotangent space is T*(M);
• the set of C^r functions is C^r(M);
• the set of vector fields is V^r(M); the set of co-vector fields is V*^r(M);
• the set of tensor fields on M with covariant order α and contravariant order β is T_α^β(M); when β = 0 it becomes T_α(M).

2 M-equivalence and Lattice Structure

2.1 STP of Matrices

This section gives a brief review on STP. We refer to [5], [6], [7] for details.

Definition 1 Let A ∈ M_{m×n}, B ∈ M_{p×q}, and let t = n ∨ p be the least common multiple of n and p. Then

(1) the left STP of A and B, denoted by A ⋉ B, is defined as

A ⋉ B := (A ⊗ I_{t/n})(B ⊗ I_{t/p}), (1)

where ⊗ is the Kronecker product [24];

(2) the right STP of A and B is defined as

A ⋊ B := (I_{t/n} ⊗ A)(I_{t/p} ⊗ B). (2)

In the following we mainly discuss the left STP, and briefly call the left STP the STP. Most of the properties of the left STP have their corresponding ones for the right STP. Please also refer to [5] or [6] for their major differences.

Remark 2 If n = p, both the left and the right STP defined in Definition 1 degenerate to the conventional matrix product. That is, STP is a generalization of the conventional matrix product. Hence, as a default, in most cases the symbol ⋉ can be omitted (but not ⋊). That is, unless stated elsewhere, throughout this paper

AB := A ⋉ B. (3)

The following propositions show that this generalization not only keeps the main properties of the conventional matrix product available, but also adds some new properties such as certain commutativity.

Associativity and distributivity are two fundamental properties of the conventional matrix product. When the product is generalized to STP, these two properties remain available.

Proposition 3
1. (Distributive Law)
F ⋉ (aG ± bH) = aF ⋉ G ± bF ⋉ H,
(aF ± bG) ⋉ H = aF ⋉ H ± bG ⋉ H, a, b ∈ F. (4)
2. (Associative Law)
(F ⋉ G) ⋉ H = F ⋉ (G ⋉ H). (5)

The following proposition is inherited from the conventional matrix product.

Proposition 4
1. (A ⋉ B)^T = B^T ⋉ A^T. (6)
2. Assume A and B are invertible; then
(A ⋉ B)^{-1} = B^{-1} ⋉ A^{-1}. (7)

The following proposition shows that the STP has certain commutativity.

Proposition 5 Given A ∈ M_{m×n}.
1. Let Z ∈ R^t be a column vector. Then
ZA = (I_t ⊗ A)Z. (8)
2. Let Z ∈ R^t be a row vector. Then
AZ = Z(I_t ⊗ A). (9)

To explore further commuting properties, we introduce the swap matrix.

Definition 6 ([24]) The swap matrix W_{[m,n]} ∈ M_{mn×mn} is defined as follows:

W_{[m,n]} = δ_{mn}[1, m+1, ···, (n−1)m+1; 2, m+2, ···, (n−1)m+2; ···; m, 2m, ···, nm]. (10)
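Definition 1 and Remark 2 can be sketched in a few lines of numpy; this is a minimal illustration, and the helper name `stp` is ours, not the paper's notation:

```python
import numpy as np

def stp(A, B):
    """Left semi-tensor product A ⋉ B per Definition 1:
    A ⋉ B := (A ⊗ I_{t/n})(B ⊗ I_{t/p}), where t = lcm(n, p)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = np.lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When n = p, STP degenerates to the ordinary product (Remark 2).
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
assert np.allclose(stp(A, B), A @ B)

# Dimension-free: a 2x2 times a 4x1 is well posed under ⋉.
x = np.arange(4.0).reshape(4, 1)
print(stp(A, x).shape)   # -> (4, 1)

# The associative law (Proposition 3) holds for arbitrary dimensions.
C = np.random.rand(3, 6)
D = np.random.rand(2, 5)
E = np.random.rand(10, 4)
assert np.allclose(stp(stp(C, D), E), stp(C, stp(D, E)))
```

Note that the Kronecker factors I_{t/n} and I_{t/p} are exactly what produces the equivalence classes ⟨A⟩ = {A, A ⊗ I_2, ···} studied in subsection 2.2.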

The following proposition shows that the swap matrix is orthogonal.

Proposition 7
W_{[m,n]}^T = W_{[m,n]}^{-1} = W_{[n,m]}. (11)

The fundamental function of the swap matrix is to "swap" two factors.

Proposition 8
1. Let X ∈ R^m, Y ∈ R^n be two column vectors. Then
W_{[m,n]} ⋉ X ⋉ Y = Y ⋉ X. (12)
2. Let X ∈ R^m, Y ∈ R^n be two row vectors. Then
X ⋉ Y ⋉ W_{[m,n]} = Y ⋉ X. (13)

The swap matrix can also swap the two factor matrices of a Kronecker product [5], [47].

Proposition 9 Let A ∈ M_{m×n} and B ∈ M_{p×q}. Then
W_{[m,p]}(A ⊗ B)W_{[q,n]} = B ⊗ A. (14)

Remark 10 Assume A ∈ M_{n×n} and B ∈ M_{p×p} are square matrices. Then (14) becomes
W_{[n,p]}(A ⊗ B)W_{[p,n]} = B ⊗ A. (15)
As a consequence, A ⊗ B and B ⊗ A are similar.

The following example is an application of Proposition 9.

Example 11 Prove
e^{A ⊗ I_k} = e^A ⊗ I_k. (16)

Assume A ∈ M_{n×n} is a square matrix and B = A ⊗ I_k. Note that
W(A ⊗ I_k)W^{-1} = I_k ⊗ A = diag(A, A, ···, A) (k copies),
where W = W_{[n,k]}. Then
e^B = e^{W^{-1}(I_k ⊗ A)W} = W^{-1} e^{diag(A, A, ···, A)} W = W^{-1} diag(e^A, e^A, ···, e^A) W = W^{-1}(I_k ⊗ e^A)W = e^A ⊗ I_k.

Remark 12 Comparing the product of numbers with the product of matrices, two major differences are: (i) the matrix product has a dimension restriction while the scalar product has none; (ii) the matrix product is not commutative while the scalar product is. When the conventional matrix product is extended to STP, these two weaknesses have been eliminated to a certain degree. First, the dimension restriction has been removed. Second, in addition to Proposition 5, which shows certain commutativity, the use of the swap matrix also provides certain commutativity properties. All these improvements make the STP more convenient in use than the conventional matrix product.

2.2 M-equivalence of Matrices

The set of all matrices (over a certain field F) is denoted by M, that is,
M = ∪_{m=1}^∞ ∪_{n=1}^∞ M_{m×n}.

It is obvious that STP is an operator ⋉ (or ⋊): M × M → M. Observing the definition of STP carefully, it is not difficult to find that when we use STP to multiply A with B, instead of A itself, we modify A by Kronecker-multiplying identities of different sizes in order to multiply different B's. In fact, STP multiplies an equivalence class of A, precisely ⟨A⟩ = {A, A ⊗ I_2, A ⊗ I_3, ···}, with an equivalence class of B, that is, ⟨B⟩ = {B, B ⊗ I_2, B ⊗ I_3, ···}. Motivated by this fact, we first propose an equivalence over the set of matrices, called the matrix equivalence (∼_ℓ or ∼_r). Then STP can be considered as an operator over the equivalence classes. We give a rigorous definition for this equivalence.

Definition 13 Let A, B ∈ M.
(1) A and B are said to be left matrix equivalent (LME), denoted by A ∼_ℓ B, if there exist two identity matrices I_s, I_t, s, t ∈ N, such that
A ⊗ I_s = B ⊗ I_t.
(2) A and B are said to be right matrix equivalent (RME), denoted by A ∼_r B, if there exist two identity matrices I_s, I_t, s, t ∈ N, such that
I_s ⊗ A = I_t ⊗ B.

Remark 14 It is easy to verify that the LME ∼_ℓ (similarly, the RME ∼_r) is an equivalence relation. That is, it is (i) reflexive (A ∼_ℓ A); (ii) symmetric (if A ∼_ℓ B, then B ∼_ℓ A); and (iii) transitive (if A ∼_ℓ B and B ∼_ℓ C, then A ∼_ℓ C).

Definition 15 Given A ∈ M.
(1) The left equivalence class of A is denoted by
⟨A⟩_ℓ := {B | B ∼_ℓ A};
(2) The right equivalence class of A is denoted by
⟨A⟩_r := {B | B ∼_r A};
(3) A is left (right) reducible if there are an I_s, s ≥ 2, and a matrix B such that A = B ⊗ I_s (correspondingly, A = I_s ⊗ B). Otherwise, A is left (right) irreducible.

Lemma 16 Assume A ∈ M_{β×β} and B ∈ M_{α×α}, where α, β ∈ N, α and β are co-prime, and
A ⊗ I_α = B ⊗ I_β. (17)
Then there exists a λ ∈ F such that
A = λ I_β, B = λ I_α. (18)

Proof. Split A ⊗ I_α into equal-size blocks as
A ⊗ I_α = [A_{11} ··· A_{1α}; ⋮ ; A_{α1} ··· A_{αα}],
where A_{ij} ∈ M_{β×β}, i, j = 1, ···, α. Then we have
A_{i,j} = b_{i,j} I_β. (19)
Note that α and β are co-prime. Comparing the entries of both sides of (19), it is clear that (i) the diagonal elements of all A_{ii} are the same; (ii) all other elements (A_{ij}, j ≠ i) are zero. Hence A = b_{11} I_β. Similarly, we have B = a_{11} I_α. But (17) requires a_{11} = b_{11}, which is the required λ. The conclusion follows. ✷

Theorem 17
(1) If A ∼_ℓ B, then there exists a matrix Λ such that
A = Λ ⊗ I_β, B = Λ ⊗ I_α. (20)
(2) In each class ⟨A⟩_ℓ there exists a unique A_1 ∈ ⟨A⟩_ℓ such that A_1 is left irreducible.

Proof.
(1) Assume A ∼_ℓ B, that is, there exist I_α and I_β such that
A ⊗ I_α = B ⊗ I_β. (21)
Without loss of generality, we assume α and β are co-prime. Otherwise, if their greatest common divisor is r = α ∧ β, the α and β in (21) can be replaced by α/r and β/r respectively.

Assume A ∈ M_{m×n} and B ∈ M_{p×q}. Then
mα = pβ, nα = qβ.
Since α and β are co-prime, we have
m = sβ, n = tβ, p = sα, q = tα.
Split A and B into block forms as
A = [A_{11} ··· A_{1t}; ⋮ ; A_{s1} ··· A_{st}], B = [B_{11} ··· B_{1t}; ⋮ ; B_{s1} ··· B_{st}],
where A_{i,j} ∈ M_{β×β} and B_{i,j} ∈ M_{α×α}, i = 1, ···, s, j = 1, ···, t. Now (21) is equivalent to
A_{ij} ⊗ I_α = B_{ij} ⊗ I_β, ∀ i, j. (22)
According to Lemma 16, we have A_{ij} = λ_{ij} I_β and B_{ij} = λ_{ij} I_α. Define
Λ := [λ_{11} ··· λ_{1t}; ⋮ ; λ_{s1} ··· λ_{st}];
equation (20) follows.

(2) For each A ∈ ⟨A⟩_ℓ we can find an irreducible A_0 such that A = A_0 ⊗ I_s. To prove it is unique, let B_0 ∈ ⟨A⟩_ℓ be irreducible with B = B_0 ⊗ I_t. We claim that A_0 = B_0. Since A_0 ∼_ℓ B_0, there exists Γ such that
A_0 = Γ ⊗ I_p, B_0 = Γ ⊗ I_q.
Since both A_0 and B_0 are irreducible, we have p = q = 1, which proves the claim. ✷

Remark 18 Theorem 17 is also true for ∼_r with obvious modifications.

Remark 19 For ease of statement, we propose the following terminology:
(1) If A = B ⊗ I_s, then B is called a divisor of A and A is called a multiple of B.
(2) If (21) holds and α, β are co-prime, then the Λ satisfying (20) is called the greatest common divisor of A and B. Moreover, Λ = gcd(A, B) is unique.
(3) If (21) holds and α, β are co-prime, then
Θ := A ⊗ I_α = B ⊗ I_β (23)
is called the least common multiple of A and B. Moreover, Θ = lcm(A, B) is unique. (Refer to Fig. 1.)
(4) Consider an equivalence class ⟨A⟩ and denote the unique irreducible element by A_1, which is called the root element. All the elements in ⟨A⟩ can be expressed as
A_i = A_1 ⊗ I_i, i = 1, 2, ···. (24)
A_i is called the i-th element of ⟨A⟩. Hence, an equivalence class ⟨A⟩ is a well-ordered sequence:
⟨A⟩ = {A_1, A_2, A_3, ···}.

Fig. 1. Θ = lcm(A, B) and Λ = gcd(A, B) [diagram: Θ is a common multiple of A ∼ B, which are in turn multiples of Λ]

Next, we modify some classical matrix functions to make them available for the equivalence class.

Definition 20
(1) Let A ∈ M_{n×n}. Then a modified determinant is defined as
Dt(A) := |det(A)|^{1/n}. (25)
(2) Consider an equivalence class of square matrices ⟨A⟩; the "determinant" of ⟨A⟩ is defined as
Dt(⟨A⟩) := Dt(A), A ∈ ⟨A⟩. (26)

Proposition 21 (26) is well defined, i.e., it is independent of the choice of the representative A.

Proof. To see that (26) is well defined, we need to check that A ∼ B implies Dt(A) = Dt(B). Now assume A = Λ ⊗ I_β, B = Λ ⊗ I_α, and Λ ∈ M_{k×k}. Then
Dt(A) = |det(Λ ⊗ I_β)|^{1/kβ} = |det(Λ)|^{1/k},
Dt(B) = |det(Λ ⊗ I_α)|^{1/kα} = |det(Λ)|^{1/k}.
It follows that (26) is well defined. ✷

Remark 22
(1) Intuitively, Dt(⟨A⟩) defines only the "absolute value" of ⟨A⟩, because if there exists an A ∈ ⟨A⟩ such that det(A) < 0, then det(A ⊗ I_2) > 0. So it is not possible to define det(⟨A⟩) uniquely over the class.
(2) When det(A) = s for all A ∈ ⟨A⟩, we also use det(⟨A⟩) = s. But when F = R, only det(⟨A⟩) = 1 makes sense.

Definition 23
(1) Let A ∈ M_{n×n}. Then a modified trace is defined as
Tr(A) := (1/n) trace(A). (27)
(2) Consider an equivalence class of square matrices ⟨A⟩; the "trace" of ⟨A⟩ is defined as
Tr(⟨A⟩) := Tr(A), A ∈ ⟨A⟩. (28)

Similar to Definition 20, we need, and can easily prove, that (28) is well defined. These two functions will be used in the sequel.

Next, we show that ⟨A⟩ = {A_1, A_2, ···} has a lattice structure.

Definition 24 ([2]) A poset L is a lattice if and only if for every pair a, b ∈ L both sup{a, b} and inf{a, b} exist.

Let A, B ∈ ⟨A⟩. If B is a divisor (multiple) of A, then B is said to be preceding (succeeding) A, denoted by B ≺ A (B ≻ A). Then ≺ is a partial order for ⟨A⟩.

Theorem 25 (⟨A⟩, ≺) is a lattice.

Proof. Assume A, B ∈ ⟨A⟩. It is enough to prove that the Λ = gcd(A, B) defined in (20) is inf(A, B), and that the Θ = lcm(A, B) defined in (23) is sup(A, B).

To prove Λ = inf(A, B), we assume C ≺ A and C ≺ B; then we need only prove that C ≺ Λ. Since C ≺ A and C ≺ B, there exist I_p and I_q such that C ⊗ I_p = A and C ⊗ I_q = B. Now
C ⊗ I_p = A = Λ ⊗ I_β,
C ⊗ I_q = B = Λ ⊗ I_α.
Hence
C ⊗ I_p ⊗ I_q = Λ ⊗ I_β ⊗ I_q = Λ ⊗ I_α ⊗ I_p.
It follows that βq = αp. Since α and β are co-prime, we have p = mβ and q = nα, where m, n ∈ N. Then we have
C ⊗ I_p = C ⊗ I_m ⊗ I_β = Λ ⊗ I_β.
That is,
C ⊗ I_m = Λ.
Hence C ≺ Λ.

To prove Θ = sup(A, B), assume D ≻ A and D ≻ B. Then we can prove D ≻ Θ in a similar way. ✷

Definition 26 ⟨A⟩ is said to possess a property if every A ∈ ⟨A⟩ possesses this property. The property is then also said to be consistent with the equivalence.

In the following, some easily verifiable consistent properties are collected.

Proposition 27
(1) Assume A ∈ M is a square matrix. The following properties are consistent with the matrix equivalence (∼_ℓ or ∼_r):
• A is orthogonal, that is, A^{-1} = A^T;
• det(A) = 1;
• trace(A) = 0;
• A is upper (lower) triangular;
• A is strictly upper (lower) triangular;
• A is symmetric (skew-symmetric);
• A is diagonal.
(2) Assume A ∈ M_{2n×2n}, n = 1, 2, ···, and
J = [0, 1; −1, 0]. (29)
The following property is consistent with the matrix equivalence:
J ⋉ A + A^T ⋉ J = 0. (30)

Remark 28 As long as a property is consistent with an equivalence, we can say whether the equivalence class has the property or not. For instance, because of Proposition 27 we can say ⟨A⟩ is orthogonal, det(⟨A⟩) = 1, etc.

2.3 Lattice Structure on M_µ

Denote by
M_µ := {A ∈ M_{m×n} | m/n = µ}. (31)
Then it is clear that we have a partition
M = ∪_{µ∈Q_+} M_µ, (32)
where Q_+ is the set of positive rational numbers.

Remark 29 To avoid possible confusion, we assume the fractions in Q_+ are all reduced. Hence for each µ ∈ Q_+ there are unique integers µ_y and µ_x, where µ_y and µ_x are co-prime, such that
µ = µ_y / µ_x.
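The root element of Theorem 17(2) and the class invariants Dt and Tr of Definitions 20 and 23 can be checked numerically. The following is a minimal numpy sketch; the names `left_root`, `Dt`, `Tr` and the naive search over block sizes are ours, not the paper's construction:

```python
import numpy as np

def left_root(A):
    """Return the left-irreducible root A_1 of <A>_l (Theorem 17(2)):
    the smallest B with A = B ⊗ I_s, found by factoring out the
    largest possible identity block."""
    A = np.atleast_2d(A)
    m, n = A.shape
    for s in range(int(np.gcd(m, n)), 0, -1):
        if m % s or n % s:
            continue
        # If A = B ⊗ I_s, then B is recovered by sampling every s-th row/column.
        B = A[::s, ::s].copy()
        if np.allclose(np.kron(B, np.eye(s)), A):
            return B
    return A

def Dt(A):
    """Modified determinant Dt(A) = |det A|^{1/n} (Definition 20)."""
    return abs(np.linalg.det(A)) ** (1.0 / A.shape[0])

def Tr(A):
    """Modified trace Tr(A) = trace(A)/n (Definition 23)."""
    return np.trace(A) / A.shape[0]

Lam = np.array([[1., 2.], [0., 3.]])
A = np.kron(Lam, np.eye(3))             # A ~ Lam under M-equivalence
assert np.allclose(left_root(A), Lam)   # Lam is the root element of <A>
# Dt and Tr are invariant over the class (Proposition 21 and its trace analogue):
assert np.isclose(Dt(A), Dt(Lam))
assert np.isclose(Tr(A), Tr(Lam))
```

The largest factor s that divides out is exactly the index i of A = A_i in the well-ordered sequence ⟨A⟩ = {A_1, A_2, ···} of Remark 19(4).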

9 Definition 30 (1) Let µ Q , p and q are co-prime Next, we define ∈ + and p/q = µ. Then we denote by µy = p and µx = q i j i j as y and x are components of µ. = ∧ , Mµ ∧Mµ Mµ (2) Denote the spaces of various dimensions in µ as (35) i j i j M = ∨ . Mµ ∨Mµ Mµ i µ := iµy iµx , i =1, 2, . M M × ··· From above discussion, the following result is obvious:

α β Assume A , A , A A , and α β, then Proposition 32 Consider µ. The followings are α ∈Mµ β ∈Mµ α ∼ β | M Aα Ik = Aβ, where k = β/α. One sees easily that we equivalent: ⊗ α β can define an embedding mapping bdk : µ µ as M →M (1) α is a subspace of β; Mµ Mµ (2) α is a factor of β, i.e., α β; bdk(A) := A Ik. (33) | ⊗ (3) α β = α; Mµ ∧Mµ Mµ (4) α β = β. In this way, α can be considered as a subspace of β. Mµ ∨Mµ Mµ Mµ Mµ The order determined by this space-subspace relation is Using the order ⊏ defined by (34), it is clear that all the denoted as fixed dimension vector spaces i , i = 1, 2, form a Mµ ··· lattice. α ⊏ β. (34) Mµ Mµ Proposition 33 ( , ⊏) is a lattice with Mµ

If (34) holds, α is called a divisor of β, and β is sup α, β = α β, Mµ Mµ Mµ µ µ µ∨ called a multiple of α. M M M (36) µ α β α β M inf ,  = ∧ . Mµ Mµ Mµ Denote by i j = gcd(i, j) and i j = lcm(i, j). Using  ∧ ∨ the order of (34), has the following structure. Mµ The following properties are easily verifiable.

i j Theorem 31 (1) Given µ and µ. The greatest Proposition 34 Consider the lattice ( µ, ⊏). i Mj M M common divisor is µ∧ , and the least common mul- i j M 1 tiple is µ∨ . (Please refer to Fig. 2) (1) It has a smallest (root) subspace µ = p q, M i j M M × (2) Assume A B, A µ and B µ. Then their where p, q are co-prime and p/q = µ. That is, ∼ ∈M ∈M i j greatest common divisor Λ = gcd(A, B) ∧ , ∈ Mµ and their least common multiple Θ = lcm(A, B) i 1 1 µ µ = µ, i j ∈ M ∧M M µ∨ . M i 1 = i . Mµ ∨Mµ Mµ M i j k µ∨ ∨ But there is no largest element. (2) The lattice is distributive, i.e., i j M j k Mµ∨ µ∨ i j k i j i k µ µ µ = µ µ µ µ , i j M k M ∧ M ∨M M ∧M ∨ M ∧M Mµ M µ µ i j k  = i j  i k  . Mµ ∨ Mµ ∧Mµ Mµ ∨Mµ ∧ Mµ ∨Mµ i j j k    M ∧ M ∧ is µ µ (3) For any finite set of spaces µ , s = 1, 2, , r. M u ··· There exists a smallest supper-space µ, u = i j k r M M ∧ ∧ i , such that µ ∨s=1 s • Fig. 2. Lattice Structure of Mµ is ⊏ u, s =1, 2, , r; Mµ Mµ ···

10 If then it is easy to see that ϕ : ( A , ) ( , ⊏) is • h i ≺ → Mα is ⊏ v , s =1, 2, , r, still a lattice homomorphism. Mµ Mµ ··· then u v Definition 39 Let (L, ) be a lattice and S L. If µ ⊏ µ. ≺ ⊂ M M (S, ) is also a lattice, it is called a sub-lattice of (L, ). ≺ ≺ Definition 35 ([16]) Let (L, ) and (M, ⊏) be two lat- ≺ Remark 40 Let ϕ : (H, ) (M, ⊏) be an injective tices. ≺ → (i.e., one-to-one) lattice homomorphism. Then ϕ : H → (1) A mapping ϕ : L M is called an order-preserving ϕ(H) is a lattice isomorphism. Hence ϕ(H) is a sub- → mapping, if ℓ ℓ implies ϕ(ℓ ) ⊏ ϕ(ℓ ). lattice of (M, ⊏). If we identify H with ϕ(H), we can 1 ≺ 2 1 2 (2) A mapping ϕ : L M is called a homomorphism, simply say that H is a sub-lattice of M. → and (L, ) and (M, ⊏) are said to be lattice homo- ≺ Definition 41 Let (L, ) and (M, ⊏) be two lattices. morphic, denoted by (L, ) (M, ⊏), if ϕ satisfies ≺ ≺ ≈ The product order := ⊏ defined on the product set the following condition: ⊂ ≺× L M := (ℓ,m) ℓ L,m M ϕ sup(ℓ1,ℓ2) = sup (ϕ(ℓ1), ϕ(ℓ2)) ; (37) × ∈ ∈  is: (ℓ ,m ) (ℓ ,m ) if and only if ℓ ℓ and m ⊏ and 1 1 ⊂ 2 2 1 ≺ 2 1 m2. ϕ inf(ℓ ,ℓ ) = inf (ϕ(ℓ ), ϕ(ℓ )) . (38) 1 2 1 2 Theorem 42 Let (L, ) and (M, ⊏) be two lattices. ≺ Then (L M, ⊏) is also a lattice, called the product (3) A homomorphism ϕ : L M is called an isomor- × ≺× → ⊏ phism, and (L, ) and (M, ⊏) are said to be lattice lattice of (L, ) and (M, ). ≺ ≺ isomorphic, denoted by (L, ) ≅ (M, ⊏), if ϕ is one ≺ Proof. Let (ℓ , m ) and (ℓ , m ) be two elements in L to one and onto. 1 1 2 2 × M. Denote by ℓs = sup(ℓ1, ℓ2) and ms = sup(m1, m2). i Then (ℓs, ms) (ℓj , mj ), j = 1, 2. To see (ℓs, ms) = Assume A µ is irreducible, define ϕ : A µ as ⊃ ∈M h i→M sup ((ℓ , m ), (ℓ , m )) let (ℓ, m) (ℓ , m ), j =1, 2. 1 1 2 2 ⊃ j j ij Then ℓ ℓj and m ⊐ mj , j =1, 2. It follows that ℓ ℓs ϕ(Aj ) := µ , j =1, 2, , (39) ≻ ≻ M ··· and m ⊐ m . That is, (ℓ, m) (ℓ , m ). We conclude s ⊃ s s then it is easy to verify the following result. 
that (ℓs, ms) = sup ((ℓ1, m1), (ℓ2, m2)) . Proposition 36 The mapping ϕ : A defined h i→Mµ in (39) is a lattice homomorphism from ( A , ) to Similarly, we set ℓ = inf(ℓ , ℓ ) and m = inf(m , m ), h i ≺ i 1 2 i 1 2 ( µ, ⊏). then we can prove that M

Next, we consider the µ for different µ’s. It is also easy (ℓ , m ) = inf ((ℓ , m ), (ℓ , m )) . M i i 1 1 2 2 to verify the following result. ✷

Proposition 37 Define a mapping ϕ : µ λ as M →M Finally, we give an example to show that an order- ϕ i := i . preserving mapping may not be an lattice homomor- Mµ Mλ phism.  The mapping ϕ : ( , ⊏) ( , ⊏) is a lattice iso- Mµ → Mλ morphism. Example 43 Consider the product of two lattices ( µ, ⊏) and ( λ, ⊏). Define a mapping Example 38 According to Proposition 37, if we still as- M M i sume A and replace µ in (39) by any α Q , that ϕ : ( µ, ⊏) ( λ, ⊏) ( µλ, ⊏) ∈Mµ ∈ + M × M → M is, define as ϕ(A ) := ij , j =1, 2, , ϕ p q := pq . j Mα ··· Mµ ×Mλ Mµλ 

Assume M_µ^i ⊏ M_µ^j and M_λ^s ⊏ M_λ^t; then i | j and s | t, and by the definition of the product lattice we have

M_µ^i × M_λ^s ⊂ M_µ^j × M_λ^t.

Since is | jt, we have

ϕ(M_µ^i × M_λ^s) = M_µλ^{is} ⊏ M_µλ^{jt} = ϕ(M_µ^j × M_λ^t).   (40)

That is, ϕ is an order-preserving mapping.

Consider two elements in the product lattice, α = M_µ^p × M_λ^s and β = M_µ^q × M_λ^t. Following the same arguments as in the proof of Theorem 42, one sees easily that

lcm(α, β) = M_µ^{p∨q} × M_λ^{s∨t},
gcd(α, β) = M_µ^{p∧q} × M_λ^{s∧t}.

Then

ϕ(lcm(α, β)) = M_µλ^{(p∨q)(s∨t)},
ϕ(gcd(α, β)) = M_µλ^{(p∧q)(s∧t)}.

On the other hand,

ϕ(α) = M_µλ^{ps}, ϕ(β) = M_µλ^{qt},

so

lcm(ϕ(α), ϕ(β)) = M_µλ^{(ps)∨(qt)},
gcd(ϕ(α), ϕ(β)) = M_µλ^{(ps)∧(qt)}.

It is obvious that in general

(p∨q)(s∨t) ≠ (ps)∨(qt),

as well as

(p∧q)(s∧t) ≠ (ps)∧(qt).

Hence, ϕ is not a homomorphism.

2.4 Monoid and Quotient Monoid

A monoid is a semigroup with identity. We refer readers to [28], [25], [19] for concepts and some basic properties. Recall that

M := ∪_{m∈N} ∪_{n∈N} M_{m×n}.

We have the following algebraic structure.

Proposition 44 The algebraic system (M, ⋉) is a monoid.

Proof. The associativity comes from the property of ⋉ (refer to (5)). The identity element is 1. ✷

One sees easily that this monoid covers numbers, vectors, and matrices of arbitrary dimensions. In the following, some of its useful sub-monoids are presented:

• M(k):

M(k) := ∪_{α∈N} ∪_{β∈N} M_{k^α × k^β},

where k ∈ N and k > 1. It is obvious that M(k) < M. (In this section A < B means A is a sub-monoid of B.) This sub-monoid is useful for calculating the product of tensors over a k-dimensional vector space [1]. It is particularly useful for k-valued logical dynamic systems [6], [7]. When k = 2, it is used for Boolean dynamic systems.

In this sub-monoid the STP can be defined as follows:

Definition 45 (1) Let X ∈ F^n be a column vector and Y ∈ F^m a row vector.

• Assume n = pm (denoted by X ≻_p Y): Split X into m equal blocks as

X = [X_1^T, X_2^T, ⋯, X_m^T]^T,

where X_i ∈ F^p, ∀ i. Define

X ⋉ Y := Σ_{s=1}^m X_s y_s ∈ F^p.

• Assume np = m (denoted by X ≺_p Y): Split Y into n equal blocks as

Y = [Y_1, Y_2, ⋯, Y_n],

where Y_i ∈ F^p, ∀ i. Define

X ⋉ Y := Σ_{s=1}^n x_s Y_s ∈ F^p.

(2) Assume A ∈ M_{m×n}, B ∈ M_{p×q}, where n = tp (denoted by A ≻_t B) or nt = p (denoted by A ≺_t B). Then

A ⋉ B := C = (c_{i,j}),

where

c_{i,j} = Row_i(A) ⋉ Col_j(B).

Remark 46 (1) It is easy to prove that when A ≺_t B or B ≺_t A for some t ∈ N, this definition of the left STP coincides with Definition 1. Though this definition is not as general as Definition 1, it has a clear physical meaning. In particular, so far this definition covers almost all the applications.
(2) Unfortunately, this definition is not suitable for the right STP. This is a big difference between the left and right STPs.

• V:

V := ∪_{k∈N} M_{k×1}.

It is obvious that V < M. This sub-monoid consists of column vectors. In this sub-monoid the STP degenerates to the Kronecker product. We denote by V^T the sub-monoid of row vectors. It is also clear that V^T < M.

• L:

L := {A ∈ M | Col(A) ⊂ Δ_s, s ∈ N}.

It is obvious that L < M. This sub-monoid consists of all logical matrices. It is used to express the product of logical mappings.

• P:

P := {A ∈ M | Col(A) ⊂ Υ_s, for some s ∈ N}.

It is obvious that P < M. This monoid is useful for probabilistic logical mappings.

• L(k):

L(k) := L ∩ M(k).

It is obvious that L(k) < L < M. We use it for k-valued logical mappings.

Next, we define the set of "short" matrices as

Ξ := {A ∈ M_{m×n} | m ≤ n},

and its subset

Ξ_r := {A ∈ M | A is of full row rank}.

Then we have the following result.

Proposition 47

Ξ_r < Ξ < M.   (41)

Proof. Assume A ∈ M_{m×n}, B ∈ M_{p×q}, and A, B ∈ Ξ; then m ≤ n and p ≤ q. Let t = n ∨ p. Then A ⋉ B ∈ M_{mt/n × tq/p}. It is easy to see that mt/n ≤ tq/p, so A ⋉ B ∈ Ξ. The second part is proved.

As for the first part, assume A, B ∈ Ξ_r. Then

rank(A ⋉ B) = rank((A ⊗ I_{t/n})(B ⊗ I_{t/p}))
  ≥ rank((A ⊗ I_{t/n})(B ⊗ I_{t/p})(B^T(BB^T)^{−1} ⊗ I_{t/p}))
  = rank((A ⊗ I_{t/n})(I_p ⊗ I_{t/p}))
  = rank(A ⊗ I_{t/n}) = mt/n.

Hence A ⋉ B ∈ Ξ_r. ✷

Similarly, we can define the set of "tall" matrices Π and the set of matrices with full column rank Π_c. We can also prove that

Π_c < Π < M.   (42)

Next, we consider the quotient space

Σ_M := M / ∼.

Definition 48 ([41]) (1) A nonempty set S with a binary operation σ : S × S → S is called an algebraic system.
(2) Assume ∼ is an equivalence relation on an algebraic system (S, σ). The equivalence relation is a congruence relation if for any A, B, C, D ∈ S with A ∼ C and B ∼ D, we have

A σ B ∼ C σ D.   (43)

Proposition 49 Consider the algebraic system (M, ⋉) with the equivalence relation ∼ = ∼_ℓ. The equivalence relation ∼ is a congruence.

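The STP and left M-equivalence are easy to experiment with numerically. The following is a minimal sketch (the helper names `stp` and `equivalent` are ours, not from the paper; numpy-based) which also illustrates the congruence property of Proposition 49: replacing the factors by equivalent representatives yields an equivalent product.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product: (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def equivalent(A, B):
    """Left M-equivalence: A ~ B iff A ⊗ I_a = B ⊗ I_b for suitable identities."""
    t = lcm(A.shape[0], B.shape[0])
    a, b = t // A.shape[0], t // B.shape[0]
    if A.shape[1] * a != B.shape[1] * b:
        return False
    return np.allclose(np.kron(A, np.eye(a)), np.kron(B, np.eye(b)))

A = np.array([[1., 2.], [0., 1.]])                       # 2 x 2
B = np.array([[1., -1., 0., 2.]])                        # 1 x 4
At, Bt = np.kron(A, np.eye(3)), np.kron(B, np.eye(2))    # equivalent copies
assert equivalent(stp(A, B), stp(At, Bt))                # congruence with the STP
```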
Proof. Let A ∼ Ã and B ∼ B̃. According to Theorem 17, there exist U ∈ M_{m×n} and V ∈ M_{p×q} such that

A = U ⊗ I_s, Ã = U ⊗ I_t;
B = V ⊗ I_α, B̃ = V ⊗ I_β.

Denote

n ∨ p = r, ns ∨ αp = rξ, nt ∨ βp = rη.

Then

A ⋉ B = (U ⊗ I_s ⊗ I_{rξ/ns})(V ⊗ I_α ⊗ I_{rξ/αp})
      = ((U ⊗ I_{r/n})(V ⊗ I_{r/p})) ⊗ I_ξ.

Similarly, we have

Ã ⋉ B̃ = ((U ⊗ I_{r/n})(V ⊗ I_{r/p})) ⊗ I_η.

Hence A ⋉ B ∼ Ã ⋉ B̃. ✷

According to Proposition 49, we know that ⋉ is well defined on the quotient space Σ_M. Moreover, the following result is obvious:

Proposition 50 (1) (Σ_M, ⋉) is a monoid.
(2) Let S < M be a sub-monoid. Then S/∼ is a sub-monoid of Σ_M, that is,

S/∼ < Σ_M.

Since the S in Proposition 50 could be any sub-monoid of M, all the aforementioned sub-monoids have their corresponding quotient sub-monoids, which are sub-monoids of Σ_M. For instance, V/∼, L/∼, etc. are sub-monoids of Σ_M.

2.5 Group Structure on M^µ

Proposition 51 Assume A ∈ M_{µ1} and B ∈ M_{µ2}; then A ⋉ B ∈ M_{µ1µ2}. That is, the operation ⋉ is a mapping

⋉ : M_{µ1} × M_{µ2} → M_{µ1µ2}.

Proof. Assume A ∈ M_{m×n} and B ∈ M_{p×q}, where µ1 = m/n and µ2 = p/q, and t = n ∨ p. Then

A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) ∈ M_{mt/n × qt/p} ⊂ M_{µ1µ2}. ✷

Definition 52 (1) Define

M^µ := ∪_{z∈Z} M_{µ^z}.

Then M^µ is closed under the operator ⋉.
(2) Set

Σ_{µ^z} := M_{µ^z} / ∼,

and define

Σ^µ := ∪_{z∈Z} Σ_{µ^z}.

Then Σ^µ is also closed under the operator ⋉.
(3) ⟨A⟩, ⟨B⟩ ∈ Σ^µ are said to be power equivalent, denoted by ⟨A⟩ ∼_p ⟨B⟩, if there exists an integer z ∈ Z such that both ⟨A⟩, ⟨B⟩ ∈ Σ_{µ^z}. Denote

⟨⟨A⟩⟩ := {⟨B⟩ | ⟨B⟩ ∼_p ⟨A⟩}.   (44)

Remark 53 It is obvious that ⋉ is consistent with ∼_p. Hence ⋉ is well defined on the set of equivalence classes as

⟨⟨A⟩⟩ ⋉ ⟨⟨B⟩⟩ := ⟨⟨A ⋉ B⟩⟩.   (45)

Then we have the following group structure.

Theorem 54 (Σ^µ/∼_p, ⋉) is a group, which is isomorphic to (Z, +). Precisely, assume A ∈ M_{µ^z}; then Ψ : Σ^µ/∼_p → Z, defined as

Ψ(⟨⟨A⟩⟩) := z,

is a group isomorphism.

2.6 Semi-tensor Addition and Vector Space Structure of Σ_µ

Definition 55 Let A, B ∈ M_µ. Precisely, A ∈ M_{m×n}, B ∈ M_{p×q}, and m/n = p/q = µ. Set t = m ∨ p. Then

(1) the left semi-tensor addition (STA) of A and B, denoted by ±, is defined as

A ± B := (A ⊗ I_{t/m}) + (B ⊗ I_{t/p}).   (46)

Correspondingly, the left semi-tensor subtraction (STS) is defined as

A ⊢ B := A ± (−B).   (47)

(2) The right STA of A and B, denoted by ±_r, is defined as

A ±_r B := (I_{t/m} ⊗ A) + (I_{t/p} ⊗ B).   (48)

Correspondingly, the right STS is defined as

A ⊣ B := A ±_r (−B).   (49)

Remark 56 Let σ ∈ {±, ⊢, ±_r, ⊣} be one of the four binary operators. Then it is easy to verify that

(1) if A, B ∈ M_µ, then A σ B ∈ M_µ;
(2) if A and B are as in Definition 55, then A σ B ∈ M_{t × nt/m};
(3) set s = n ∨ q; then s/n = t/m and s/q = t/p, so σ can also be defined by using column numbers, e.g.,

A ± B := (A ⊗ I_{s/n}) + (B ⊗ I_{s/q}),

etc.

Theorem 57 Consider the algebraic system (M_µ, σ), where σ ∈ {±, ⊢} and ∼ = ∼_ℓ (or σ ∈ {±_r, ⊣} and ∼ = ∼_r). Then the equivalence relation ∼ is a congruence relation with respect to σ.

Proof. We prove one case, where σ = ± and ∼ = ∼_ℓ. Proofs for the other cases are similar.

Assume A ∼_ℓ Ã and B ∼_ℓ B̃. Set P = gcd(A, Ã) and Q = gcd(B, B̃); then

Ã = P ⊗ I_β, A = P ⊗ I_α;   (50)
B̃ = Q ⊗ I_γ, B = Q ⊗ I_δ,   (51)

where P ∈ M_{xµ×x}, Q ∈ M_{yµ×y}, and x, y ∈ N are certain numbers.

Now consider Ã ± B̃. Assume η = x ∨ y, t = xβ ∨ yγ = ηξ, and s = xα ∨ yδ = ηζ. Then we have

Ã ± B̃ = (P ⊗ I_β ⊗ I_{t/xβ}) + (Q ⊗ I_γ ⊗ I_{t/yγ})
       = ((P ⊗ I_{η/x}) + (Q ⊗ I_{η/y})) ⊗ I_ξ.   (52)

Similarly, we have

A ± B = ((P ⊗ I_{η/x}) + (Q ⊗ I_{η/y})) ⊗ I_ζ.   (53)

(52) and (53) imply that Ã ± B̃ ∼ A ± B. ✷

Define the left and right quotient spaces Σ_µ^ℓ and Σ_µ^r respectively as

Σ_µ^ℓ := M_µ / ∼_ℓ;   (54)
Σ_µ^r := M_µ / ∼_r.   (55)

According to Theorem 57, the operation ± (or ⊢) can be extended to Σ_µ^ℓ as

⟨A⟩_ℓ ± ⟨B⟩_ℓ := ⟨A ± B⟩_ℓ,
⟨A⟩_ℓ ⊢ ⟨B⟩_ℓ := ⟨A ⊢ B⟩_ℓ, ⟨A⟩_ℓ, ⟨B⟩_ℓ ∈ Σ_µ^ℓ,   (56)

respectively. Similarly, we can define ±_r (or ⊣) on the quotient space Σ_µ^r as

⟨A⟩_r ±_r ⟨B⟩_r := ⟨A ±_r B⟩_r,
⟨A⟩_r ⊣ ⟨B⟩_r := ⟨A ⊣ B⟩_r, ⟨A⟩_r, ⟨B⟩_r ∈ Σ_µ^r.   (57)

The following result is important, and the verification is straightforward.

Theorem 58 Using the definitions in (56) (correspondingly, (57)), the quotient space (Σ_µ^ℓ, ±) (correspondingly, (Σ_µ^r, ±_r)) is a vector space.

Remark 59 As a consequence, (Σ_µ^ℓ, ±) (or (Σ_µ^r, ±_r)) is an Abelian group.

Remark 60 Recall Example 11; it shows that the exponential function exp is well defined on the quotient space Σ := Σ_1.

Since each ⟨A⟩ ∈ Σ has a unique left (or right) irreducible element A_1 (or Ã_1) such that A ∼_ℓ A_1 (or A ∼_r Ã_1), in general we can use the irreducible element, which is also called the root element of an equivalence class, as the representative of this class. But this is not compulsory.

For notational and statement ease, hereafter we consider Σ_µ^ℓ only unless stated otherwise. As a convention, the omitted script ("ℓ" or "r") means ℓ. For instance, Σ_µ = Σ_µ^ℓ, ∼ = ∼_ℓ, ⟨A⟩ = ⟨A⟩_ℓ, etc.

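The left STA of Definition 55 and its compatibility with M-equivalence (Theorem 57) can likewise be checked numerically. A small sketch (the helper name `sta` is ours; numpy-based):

```python
import numpy as np
from math import lcm

def sta(A, B):
    """Left STA (46): (A ⊗ I_{t/m}) + (B ⊗ I_{t/p}), t = lcm(m, p).
    Requires A and B to have the same row/column ratio mu."""
    m, p = A.shape[0], B.shape[0]
    assert A.shape[1] * p == B.shape[1] * m, "shape ratios differ"
    t = lcm(m, p)
    return np.kron(A, np.eye(t // m)) + np.kron(B, np.eye(t // p))

A = np.array([[1., 2.]])                    # 1 x 2, mu = 1/2
B = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.]])            # 2 x 4, mu = 1/2
S = sta(A, B)                               # a representative of <A> ± <B>
S2 = sta(np.kron(A, np.eye(3)), B)          # swap A for an equivalent copy
# Theorem 57: the class of the sum is unchanged.
t = lcm(S.shape[0], S2.shape[0])
assert np.allclose(np.kron(S, np.eye(t // S.shape[0])),
                   np.kron(S2, np.eye(t // S2.shape[0])))
```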
3 Topology on M-equivalence Space

3.1 Topology via Sub-basis

This subsection builds, step by step, a topology on the quotient space Σ_µ using a sub-basis.

First, we consider the partition (32). It is natural to assume that each M_µ is a clopen subset in M, because distinct µ's correspond to distinct shapes of matrices. Now inside each M_µ we assume µ_y, µ_x ∈ N are co-prime and µ_y/µ_x = µ. Then

M_µ = ∪_{i=1}^∞ M_µ^i,

where

M_µ^i = M_{iµ_y × iµ_x}, i = 1, 2, ⋯.

For a similar reason, we also assume each M_µ^i is clopen. Overall, we have a set structure on M as

M = ∪_{µ∈Q_+} ∪_{i=1}^∞ M_µ^i.   (58)

Definition 61 A natural topology on M, denoted by T_M, consists of

(1) a partition into countable clopen subsets M_µ^i, µ ∈ Q_+, i ∈ N;
(2) the conventional Euclidean R^{i²µ_yµ_x} topology for each M_µ^i.

Then M becomes a topological space. Moreover, it is obvious that (M, T_M) is a second countable Hausdorff space.

Next, we consider the quotient space

Σ_M := M / ∼.

It is clear that

Σ_M = ∪_{µ∈Q_+} Σ_µ.   (59)

Moreover, (59) is also a partition. Hence each Σ_µ can be considered as a clopen subset in Σ_M. We are, therefore, interested only in constructing a topology on each Σ_µ.

Definition 62 (1) Consider M_µ^i as a Euclidean space R^{i²µ_yµ_x} with the conventional Euclidean topology. Assume o_i ≠ ∅ is an open set. Define a subset s(o_i) ⊂ Σ_µ as follows:

⟨A⟩ ∈ s(o_i) ⟺ ⟨A⟩ ∩ o_i ≠ ∅.   (60)

(2) Let

O_i = {o_i | o_i is an open ball in M_µ^i with rational center and rational radius}.

(3) Using O_i, we construct a set of subsets S_i ⊂ 2^{Σ_µ} as

S_i := {s_i | s_i = s(o_i) for some o_i ∈ O_i}, i = 1, 2, ⋯.

Taking S = ∪_{i=1}^∞ S_i as a topological sub-basis, the topology generated by S is denoted by T, which makes (Σ_µ, T) a topological space. (We refer to [29] for a topology produced from a sub-basis.)

Note that the topological basis consists of the set of finite intersections of s_i ∈ S_i, i = 1, ⋯, t, t < ∞.

Remark 63 (1) It is clear that T makes (Σ_µ, T) a topological space.
(2) The topological basis is

B := {s_{i_1} ∩ s_{i_2} ∩ ⋯ ∩ s_{i_r} | s_{i_j} ∈ S_{i_j}, j = 1, ⋯, r; r < ∞}.   (61)

(3) Fig. 3 (s_i ∩ s_j: an element in the topological basis) depicts an element in the topological basis. Here o_1 ⊂ M_µ^i and o_2 ⊂ M_µ^j are two open discs with rational center and rational radius. Then s_1(o_1) and s_2(o_2) are two elements in the sub-basis, and

s_1 ∩ s_2 = {⟨A⟩ | ⟨A⟩ ∩ o_i ≠ ∅, i = 1, 2}

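The membership test (60) behind the sub-basis elements is concrete: a class ⟨A⟩ meets an open ball o ⊂ M_µ^i if and only if it has a representative on that leaf (which requires the leaf index to be a multiple of the root leaf index) and that representative lies in the ball. A sketch under these assumptions (the helper name `class_meets_ball` is ours; numpy-based):

```python
import numpy as np

def class_meets_ball(A1, center, radius):
    """Is <A1> in the sub-basis element s(o), where o = B_radius(center) is an
    open ball on the leaf that `center` lives on?  A1 is assumed irreducible
    (the root element of its class)."""
    m1, n1 = A1.shape
    m, n = center.shape
    # <A1> meets this leaf iff the leaf index is a multiple of the root index
    if m % m1 or n % n1 or m // m1 != n // n1:
        return False
    rep = np.kron(A1, np.eye(m // m1))   # the unique representative on the leaf
    return np.linalg.norm(rep - center) < radius

A1 = np.array([[1., 0.]])                # irreducible, lives on M_{1/2}^1
c = np.kron(A1, np.eye(2))               # ball center on the leaf M_{1/2}^2
assert class_meets_ball(A1, c, 0.5)      # the representative is the center itself
assert not class_meets_ball(A1, np.ones((2, 4)), 0.1)
```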
is an element in the basis.

Theorem 64 The topological space (Σ_µ, T) is a second countable, Hausdorff (or T_2) space.

Proof. To see that (Σ_µ, T) is second countable: it is easy to see that each O_i is countable. Then ∪{O_i | i = 1, 2, ⋯}, as a countable union of countable sets, is countable. Finally, B, consisting of finite intersections drawn from a countable set, is countable.

Next, consider ⟨A⟩ ≠ ⟨B⟩ ∈ Σ_µ. Let A_1 ∈ ⟨A⟩ and B_1 ∈ ⟨B⟩ be their irreducible elements respectively. If A_1, B_1 ∈ M_µ^i for the same i, then we can find two open sets ∅ ≠ o_a, o_b ⊂ M_µ^i, o_a ∩ o_b = ∅, such that A_1 ∈ o_a and B_1 ∈ o_b. Then by definition, s_a(o_a) ∩ s_b(o_b) = ∅, and ⟨A⟩ ∈ s_a, ⟨B⟩ ∈ s_b.

Finally, assume A_1 ∈ M_µ^i, B_1 ∈ M_µ^j and i ≠ j. Let t = i ∨ j. Then

A_{t/i} = A_1 ⊗ I_{t/i} ∈ M_µ^t, B_{t/j} = B_1 ⊗ I_{t/j} ∈ M_µ^t.

Since A_{t/i} ≠ B_{t/j}, we can find o_a, o_b ⊂ M_µ^t with o_a ∩ o_b = ∅ such that A_{t/i} ∈ o_a and B_{t/j} ∈ o_b. That is, s_a(o_a) and s_b(o_b) separate ⟨A⟩ and ⟨B⟩. ✷

If we consider

M = Π_{i=1}^∞ Π_{j=1}^∞ M_{i×j}   (62)

as a product topological space, then T_M is the quotient topology of the standard product topology on the product space defined by (62). (We refer to [48] for the product topology.)

3.2 Bundle Structure on M_µ

Definition 65 ([27]) A bundle is a triple (E, p, B), where E and B are two topological spaces and p : E → B is a continuous map. E and B are called the total space and base space respectively. For each b ∈ B, p^{−1}(b) is called the fiber of the bundle at b ∈ B.

Observing the two topologies T_M and T constructed in the previous subsection, the following result is obvious:

Proposition 66 (M_µ, Pr, Σ_µ) is a bundle, where Pr is the natural projection, i.e.,

Pr(A) = ⟨A⟩.

Remark 67 (1) Of course, (M, Pr, Σ_M) is also a bundle. But it is of less interest because it is a discrete union of (M_µ, Pr, Σ_µ), µ ∈ Q_+.
(2) Consider an equivalence class ⟨A⟩ = {A_1, A_2, ⋯} ∈ Σ_µ, where A_1 is irreducible. Then the fiber over ⟨A⟩ is a discrete set:

Pr^{−1}(⟨A⟩) = {A_1, A_2, A_3, ⋯}.

Hence this fiber bundle is named the discrete bundle.
(3) Fig. 4 illustrates the fiber bundle structure of (M_µ, Pr, Σ_µ). Here A_1 ∈ ⟨A⟩ and B_1 ∈ ⟨B⟩ are irreducible, A_1 ∈ M_µ^α and B_1 ∈ M_µ^β. Their fibers are depicted in Fig. 4.

Fig. 4. Fiber Bundle Structure

We can define a set of cross sections [27] c_i : Σ_µ → M_µ as

c_i(⟨A⟩) := A_i, i = 1, 2, ⋯.   (63)

It is clear that Pr ∘ c_i = 1_{Σ_µ}, where 1_{Σ_µ} is the identity mapping on Σ_µ.

Next, we consider some truncated sub-bundles of (M_µ, Pr, Σ_µ).

Definition 68 Assume k ∈ N.

(1) Set

M_µ^{[·,k]} := {M_{m×n} ∈ M_µ | m | kµ_y},   (64)

i.e., the union of the leafs M_µ^i with i | k. Then M_µ^{[·,k]} is called the k-upper bounded subspace of M_µ.
(2) Σ_µ^{[·,k]} := M_µ^{[·,k]} / ∼ is called the k-upper bounded subspace of Σ_µ.

The natural projection Pr : M_µ^{[·,k]} → Σ_µ^{[·,k]} is obviously defined. Then we have the following bundle structure.

Proposition 69 (M_µ^{[·,k]}, Pr, Σ_µ^{[·,k]}) is a sub-bundle of (M_µ, Pr, Σ_µ). Precisely speaking, the following graph (65) is commutative, where π and π′ are inclusion mappings. (65) is also called the bundle morphism.

M_µ^{[·,k]} —π→ M_µ
   Pr ↓         ↓ Pr      (65)
Σ_µ^{[·,k]} —π′→ Σ_µ

Remark 70 (1) In a truncated sub-bundle there is a maximum cross section M_{kp×kq} and a minimum cross section (i.e., root leaf) M_{p×q}, where p, q are co-prime and p/q = µ.
(2) Let M_{rp×rq}, r = i_1, ⋯, i_t, be a finite set of cross sections of (M_µ, Pr, Σ_µ). Set k = i_1 ∨ i_2 ∨ ⋯ ∨ i_t; then there exists a smallest truncation Σ_µ^{[·,k]} which contains M_{rp×rq}, r = i_1, ⋯, i_t, as its cross sections.

Definition 71 (1) Define

M_µ^{[k,·]} := {M ∈ M_µ^s | k | s},

which is called the k-lower bounded subspace of M_µ.
(2) Define the quotient space

Σ_µ^{[k,·]} := M_µ^{[k,·]} / ∼,

which is called the k-lower bounded subspace of Σ_µ.
(3) Assume α | β. Define

M_µ^{[α,β]} := M_µ^{[α,·]} ∩ M_µ^{[·,β]},

which is called the [α,β]-bounded subspace of M_µ.
(4) Define the quotient space

Σ_µ^{[α,β]} := M_µ^{[α,β]} / ∼,

which is called the [α,β]-bounded subspace of Σ_µ.

Remark 72 Proposition 69 is also true for the two other truncated sub-bundles, (M_µ^{[k,·]}, Pr, Σ_µ^{[k,·]}) and (M_µ^{[α,β]}, Pr, Σ_µ^{[α,β]}), respectively. Precisely speaking, if in (65) both M_µ^{[·,k]} and Σ_µ^{[·,k]} are replaced by M_µ^{[k,·]} and Σ_µ^{[k,·]} respectively, or by M_µ^{[α,β]} and Σ_µ^{[α,β]} respectively, then (65) remains commutative.

3.3 Coordinate Frame on Σ_µ

It is obvious that (Σ_µ, ±) is an infinite dimensional vector space. Since each ⟨A⟩ ∈ Σ_µ has a finite coordinate expression, we may try to avoid using a basis with infinitely many elements. To this end, we construct a set of "consistent" coordinate frames {O_1, O_2, ⋯}. Then ⟨A⟩ can be expressed by A_1 ∈ Span{O_i}, A_2 ∈ Span{O_{i+1}}, and so on. Moreover, O_i ⊂ O_{i+1} is a subset (or O_i is part of the coordinate elements in O_{i+1}). Then ⟨A⟩ can always be expressed in O_i no matter which representative is chosen. The purpose of this section is to build such a set of consistent coordinate sub-frames, which together form an overall coordinate frame.

Assume A_α ∈ M_µ^α, A_β ∈ M_µ^β, A_α ∼ A_β, and α | β; then A_α ⊗ I_k = A_β, where k = β/α. Recall that the order determined by this space-subspace relation is denoted as

M_µ^α ⊏ M_µ^β.   (66)

One sees easily that we can define an embedding mapping bd_k : M_µ^α → M_µ^β as

bd_k(A) := A ⊗ I_k.   (67)

In this way, M_µ^α can be considered as a subspace of M_µ^β.

In the following we construct a proper coordinate frame on M_µ^β, which makes M_µ^α its coordinate subspace; that is, M_µ^α is generated by part of the coordinate variables of M_µ^β. To this end, we build a set of bases on M_µ^β as follows:

Assume p = µ_y and q = µ_x. Splitting C ∈ M_µ^β into αp × αq blocks, where each block is of dimension k × k, yields

C = [C^{1,1}, C^{1,2}, ⋯, C^{1,αq}; ⋯; C^{αp,1}, C^{αp,2}, ⋯, C^{αp,αq}].

Then for each block position (I, J), with C^{I,J} ∈ M_{k×k}, we construct a basis, which consists of three classes:

• Class 1:

Δ_{i,j}^{I,J} = (b_{u,v}) ∈ M_{k×k}, i ≠ j,   (68)

where

b_{u,v} = 1 if u = i, v = j, and b_{u,v} = 0 otherwise.

That is, for the (I,J)-th block, at each given off-diagonal position (i, j), set that entry to 1 and all other entries to 0.

• Class 2:

D^{I,J} := (1/√k) I_k.   (69)

That is, at each (I,J)-th block, set D^{I,J} = (1/√k) I_k as a basis element.

• Class 3:

E_t^{I,J} = (1/√(t(t−1))) diag(1, ⋯, 1, −(t−1), 0, ⋯, 0), t = 2, ⋯, k,   (70)

where the entry −(t−1) is at the t-th diagonal position, preceded by t−1 ones and followed by k−t zeros. That is, the E_t^{I,J} are the remaining basis elements of the diagonal subspace of the (I,J)-th block, which are orthogonal to D^{I,J}.

Let A, B ∈ M_{m×n}. Recall that the Frobenius inner product is defined as [24]

(A | B)_F := Σ_{i=1}^m Σ_{j=1}^n a_{i,j} b_{i,j}.   (71)

Correspondingly, the Frobenius norm is defined as

‖A‖_F := √(A | A)_F.   (72)

If (A | B)_F = 0, then A is said to be orthogonal to B. Using the Frobenius inner product, it is easy to verify the following result.

Proposition 73 (1) Set

B^{I,J} := {Δ_{i,j}^{I,J}, 1 ≤ i ≠ j ≤ k; D^{I,J}; E_t^{I,J}, t = 2, ⋯, k};   (73)

then B^{I,J} is an orthonormal basis for the (I,J)-th block.
(2) Set

B := {B^{I,J} | I = 1, 2, ⋯, αp; J = 1, 2, ⋯, αq};

then B is an orthonormal basis for M_µ^β.
(3) Set

D := {D^{I,J} | I = 1, 2, ⋯, αp; J = 1, 2, ⋯, αq};

then D is an orthonormal basis for the subspace M_µ^α ⊂ M_µ^β.

Example 74 Consider M_{1/2}^2 ⊏ M_{1/2}^4. For any A ∈ M_{1/2}^4, we split A as

A = [A^{1,1}, A^{1,2}, A^{1,3}, A^{1,4}; A^{2,1}, A^{2,2}, A^{2,3}, A^{2,4}].

Then we build the orthonormal basis block-wise as

B^{I,J} := {Δ_{1,2}^{I,J} = [0 1; 0 0], Δ_{2,1}^{I,J} = [0 0; 1 0], D^{I,J} = (1/√2)[1 0; 0 1], E_2^{I,J} = (1/√2)[1 0; 0 −1]}.

The orthonormal basis as proposed in Proposition 73 is

B = {B^{I,J} | I = 1, 2; J = 1, 2, 3, 4}.

Assume A ∈ M_µ^β and C ∈ M_µ^α with αk = β. Using (73), matrix A can be expressed as

A = Σ_I Σ_J (Σ_{i≠j} c_{i,j}^{I,J} Δ_{i,j}^{I,J} + d_{I,J} D^{I,J} + Σ_{t=2}^k e_t^{I,J} E_t^{I,J}).

When M_µ^α is merged into M_µ^β as a subspace, matrix C can be expressed as

C = Σ_I Σ_J d′_{I,J} D^{I,J}.

Definition 75 Assume C = (c_{i,j}) ∈ M_µ^α and β = kα. The embedding mapping bd_k : C ↦ C ⊗ I_k is then expressed block-wise as

bd_k(C)^{I,J} = √k c_{I,J} D^{I,J}.   (74)

To be consistent with this, we define the projection as follows:

Definition 76 Assume A = (A^{I,J}) ∈ M_µ^β, where β = kα and A^{I,J} = (A_{i,j}^{I,J}) ∈ M_{k×k}. The projection pr_k : M_µ^β → M_µ^α is defined as

pr_k(A) = C = (c_{I,J}) ∈ M_µ^α,   (75)

where

c_{I,J} = (1/k) Σ_{d=1}^k A_{d,d}^{I,J}.

According to the above construction, it is easy to verify the following:

Proposition 77 The composed mapping pr_k ∘ bd_k is an identity mapping. Precisely,

pr_k ∘ bd_k(C) = C, ∀ C ∈ M_µ^α.   (76)

3.4 Inner Product on Σ_µ

Let A = (a_{i,j}), B = (b_{i,j}) ∈ M_{m×n}. It is well known that the Frobenius inner product of A and B is defined by (71), and the Frobenius norm is defined by (72).

The following lemma comes from a straightforward computation.

Lemma 78 Let A, B ∈ M_{m×n}. Then

(A ⊗ I_k | B ⊗ I_k)_F = k (A | B)_F.   (77)

Definition 79 Let A, B ∈ M_µ, where A ∈ M_µ^α and B ∈ M_µ^β. Then the weighted inner product of A, B is defined as

(A | B)_W := (1/t) (A ⊗ I_{t/α} | B ⊗ I_{t/β})_F,   (78)

where t = α ∨ β is the least common multiple of α and β.

Using Lemma 78 and Definition 79, we have the following property.

Proposition 80 Let A, B ∈ M_µ. If A and B are orthogonal, i.e., (A | B)_F = 0, then A ⊗ I_ξ and B ⊗ I_ξ are also orthogonal.

Now we are ready to define the inner product on Σ_µ.

Definition 81 Let ⟨A⟩, ⟨B⟩ ∈ Σ_µ. Their inner product is defined as

(⟨A⟩ | ⟨B⟩) := (A | B)_W.   (79)

The following proposition shows that (79) is well defined.

Proposition 82 Definition 81 is well defined. That is, (79) is independent of the choice of the representatives A and B.

Proof. Assume A_1 ∈ ⟨A⟩ and B_1 ∈ ⟨B⟩ are irreducible. Then it is enough to prove that

(A | B)_W = (A_1 | B_1)_W, ∀ A ∈ ⟨A⟩, B ∈ ⟨B⟩.   (80)

Assume A_1 ∈ M_µ^α and B_1 ∈ M_µ^β. Let

A = A_1 ⊗ I_ξ ∈ M_µ^{αξ},
B = B_1 ⊗ I_η ∈ M_µ^{βη}.

Denote t = α ∨ β, s = αξ ∨ βη, and s = tℓ. Using (78), we have

(A | B)_W = (1/s)(A ⊗ I_{s/αξ} | B ⊗ I_{s/βη})_F
  = (1/s)(A_1 ⊗ I_{s/α} | B_1 ⊗ I_{s/β})_F
  = (1/(tℓ))(A_1 ⊗ I_{t/α} ⊗ I_ℓ | B_1 ⊗ I_{t/β} ⊗ I_ℓ)_F
  = (1/t)(A_1 ⊗ I_{t/α} | B_1 ⊗ I_{t/β})_F
  = (A_1 | B_1)_W. ✷

Definition 83 ([44]) A real or complex vector space X is an inner-product space if there is a mapping X × X → R (or C), denoted by (x | y), satisfying

(1) (x + y | z) = (x | z) + (y | z), ∀ x, y, z ∈ X;
(2) (x | y) = conj((y | x)), where conj stands for the complex conjugate;
(3) (ax | y) = a(x | y), a ∈ R (or C);

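Proposition 82, the independence of the weighted inner product (78)–(79) from the choice of representatives, can be verified numerically. A sketch (the helper name `w_inner` is ours; numpy-based):

```python
import numpy as np
from math import lcm

def w_inner(A, B, mu_y, mu_x):
    """Weighted inner product (78): (1/t) <A ⊗ I_{t/a}, B ⊗ I_{t/b}>_F."""
    a, b = A.shape[0] // mu_y, B.shape[0] // mu_y
    assert A.shape == (a * mu_y, a * mu_x) and B.shape == (b * mu_y, b * mu_x)
    t = lcm(a, b)
    Ae = np.kron(A, np.eye(t // a))
    Be = np.kron(B, np.eye(t // b))
    return float(np.sum(Ae * Be)) / t

A = np.array([[1., 2., 0., 1.]])    # on the root leaf of M_{1/4} (mu_y=1, mu_x=4)
B = np.array([[0., 1., 1., 0.]])
v = w_inner(A, B, 1, 4)
# Proposition 82: the value does not depend on the representatives chosen.
assert np.isclose(v, w_inner(np.kron(A, np.eye(2)), B, 1, 4))
assert np.isclose(v, w_inner(np.kron(A, np.eye(3)), np.kron(B, np.eye(2)), 1, 4))
```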
(4) (x | x) ≥ 0, and (x | x) ≠ 0 if x ≠ 0.

By definition it is easy to verify the following result.

Theorem 84 The vector space (Σ_µ, ±) with the inner product defined by (79) is an inner product space.

Then the norm of ⟨A⟩ ∈ Σ_µ is defined naturally as

‖⟨A⟩‖ := √(⟨A⟩ | ⟨A⟩).   (81)

The following are some standard results for inner product spaces.

Theorem 85 Assume ⟨A⟩, ⟨B⟩ ∈ Σ_µ. Then we have the following:

(1) (Schwarz Inequality)

|(⟨A⟩ | ⟨B⟩)| ≤ ‖⟨A⟩‖ ‖⟨B⟩‖;   (82)

(2) (Triangular Inequality)

‖⟨A⟩ ± ⟨B⟩‖ ≤ ‖⟨A⟩‖ + ‖⟨B⟩‖;   (83)

(3) (Parallelogram Law)

‖⟨A⟩ ± ⟨B⟩‖² + ‖⟨A⟩ ⊢ ⟨B⟩‖² = 2‖⟨A⟩‖² + 2‖⟨B⟩‖².   (84)

Note that the above properties show that Σ_µ is a normed space.

Finally, we present the generalized Pythagorean theorem:

Theorem 86 Let ⟨A⟩_i ∈ Σ_µ, i = 1, 2, ⋯, n, be an orthogonal set. Then

‖⟨A⟩_1 ± ⟨A⟩_2 ± ⋯ ± ⟨A⟩_n‖² = ‖⟨A⟩_1‖² + ‖⟨A⟩_2‖² + ⋯ + ‖⟨A⟩_n‖².   (85)

A natural question is: "Is Σ_µ a Hilbert space?" Unfortunately, this is not true. This fact is shown in the following counter-example.

Example 87 Define a sequence of elements, denoted as ⟨A⟩_k, k = 1, 2, ⋯, as follows: A_1 ∈ M_µ is arbitrary. Define A_k inductively as

A_{k+1} = A_k ⊗ I_2 + E_{k+1} ∈ M_µ^{2^k}, k = 1, 2, ⋯,

where E_s = (e_{i,j}^s) ∈ M_µ^{2^{s−1}} (s ≥ 2) is defined as

e_{i,j}^s = 1/2^s if i = 1, j = 2, and e_{i,j}^s = 0 otherwise.

First, we claim that ⟨A⟩ := {⟨A_k⟩ | k = 1, 2, ⋯} is a Cauchy sequence. Let n > m. Then

‖⟨A⟩_m ⊢ ⟨A⟩_n‖ ≤ ‖⟨A⟩_m ⊢ ⟨A⟩_{m+1}‖ + ⋯ + ‖⟨A⟩_{n−1} ⊢ ⟨A⟩_n‖ ≤ 1/2^{m+1} + ⋯ + 1/2^n ≤ 1/2^m.   (86)

Then we prove by contradiction that it does not converge to any element. Assume it converges to ⟨A_0⟩; it is enough to consider the following three cases:

• Case 1: assume A_0 ∈ M_µ^{2^s} and A_0 = A_{s+1}. Then

‖⟨A_0⟩ ⊢ ⟨A_{s+2}⟩‖ = ‖⟨A_{s+1}⟩ ⊢ ⟨A_{s+2}⟩‖ = 1/2^{s+2}.

Similar to (86), we can prove that

‖⟨A_0⟩ ⊢ ⟨A_t⟩‖ > 1/2^{s+2}, t > s + 2.

Hence the sequence cannot converge to ⟨A_0⟩.

• Case 2: assume A_0 ∈ M_µ^{2^s} and A_0 ≠ A_{s+1}. Note that A_0 ⊢ A_{s+1} is orthogonal to E_{s+2}; then it is clear that

‖⟨A_0 ⊢ A_{s+2}⟩‖ > ‖⟨A_0 ⊢ A_{s+1}⟩‖.

Note that by construction, as long as t ≥ s + 2, A_t − A_{s+1} and A_0 − A_{s+1} are orthogonal. Using the generalized Pythagorean theorem, we have

‖⟨A_0 ⊢ A_t⟩‖ = √(‖⟨A_0 ⊢ A_{s+1}⟩‖² + ‖⟨A_{s+1} ⊢ A_t⟩‖²)
           ≥ ‖⟨A_0 ⊢ A_{s+1}⟩‖ > 0, t ≥ s + 2.

Hence the sequence cannot converge to ⟨A_0⟩.

• Case 3: A_0 ∈ M_µ^{2^s ξ}, where ξ > 1 is odd. Corresponding to Case 1, we assume A_0 = A_{s+1} ⊗ I_ξ. Then we have

‖⟨A_0 ⊢ A_{s+2}⟩‖ = ‖⟨A_{s+1} ⊗ I_ξ ⊢ (A_{s+1} ⊗ I_2 + E_{s+2}) ⊗ I_ξ⟩‖ = ‖⟨E_{s+2} ⊗ I_ξ⟩‖ = 1/2^{s+2},

and

‖⟨A_0 ⊢ A_t⟩‖ > 1/2^{s+2}, t > s + 2.

So the sequence cannot converge to ⟨A_0⟩. Corresponding to Case 2, assume A_0 ≠ A_{s+1} ⊗ I_ξ. Using Proposition 80, a similar argument shows that the sequence cannot converge to ⟨A_0⟩ either.

3.5 Σ_µ as a Metric Space

Definition 90 ([15]) (1) A topological space is regular (or T_3) if for each closed set X and each x ∉ X there exist open neighborhoods U_x of x and U_X of X such that U_x ∩ U_X = ∅.
(2) A topological space is normal (or T_4) if for each pair of closed sets X and Y there exist open neighborhoods U_X of X and U_Y of Y such that U_X ∩ U_Y = ∅.

Using the norm defined in the previous section, one sees easily that Σ_µ is a metric space:

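A small numerical sketch of this distance, computed through representatives as in (78) and (87) (the helper name `dist` is ours; numpy-based):

```python
import numpy as np
from math import lcm

def dist(A, B, mu_y):
    """d(<A>, <B>) = ||<A> (STS) <B>||: the weighted Frobenius norm of
    A ⊗ I_{t/a} - B ⊗ I_{t/b}, where a, b are the leaf indices and t = lcm(a, b)."""
    a, b = A.shape[0] // mu_y, B.shape[0] // mu_y
    t = lcm(a, b)
    D = np.kron(A, np.eye(t // a)) - np.kron(B, np.eye(t // b))
    return np.linalg.norm(D) / np.sqrt(t)

A = np.array([[3., 0.], [0., 3.]])               # <A> = <3 I>
B = np.array([[1.]])                             # <B> = <I>, mu = 1
assert np.isclose(dist(A, B, 1), 2.0)            # ||2 I_2||_W = sqrt(8/2) = 2
assert np.isclose(dist(A, np.kron(A, np.eye(5)), 1), 0.0)   # same class
```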
Theorem 88 Σ_µ with the distance

d(⟨A⟩, ⟨B⟩) := ‖⟨A⟩ ⊢ ⟨B⟩‖, ⟨A⟩, ⟨B⟩ ∈ Σ_µ,   (87)

is a metric space.

Theorem 89 Consider Σ_µ. The topology deduced by the distance d, denoted by T_d, is exactly the same as the topology T defined in Definition 62.

Proof. Assume U ∈ T_d and p ∈ U. Then there exists a ball B_ε(p) such that B_ε(p) ⊂ U, where ε > 0. Assume p = ⟨A_0⟩ and A_0 ∈ M_µ^s = M_{sµ_y × sµ_x}. Now we can construct a ball B_δ(A_0) ⊂ M_µ^s, where δ > 0. Note that s(B_δ(A_0)) is a sub-basis element of T, and hence is an open set in (Σ_µ, T). By continuity, for δ > 0 small enough, q ∈ s(B_δ(A_0)) implies d(q, ⟨A_0⟩) < ε. That is,

s(B_δ(A_0)) ⊂ B_ε(p) ⊂ U,

which means U ∈ T. Hence T_d ⊂ T.

Conversely, assume q ∈ V ∈ T. Then there exists a basic open set s_1 ∩ ⋯ ∩ s_r ∈ T such that q ∈ s_1 ∩ ⋯ ∩ s_r. Express q = ⟨A⟩ = {A_1, A_2, ⋯}, where A_i ∈ s_i ⊂ M_µ^{ξ_i}, i = 1, ⋯, r. For each A_i, we can find B_{ε_i}(A_i) ⊂ s_i, where ε_i > 0, i = 1, ⋯, r. Choosing δ_i > 0 small enough such that

B_{δ_i}(⟨A_i⟩) ⊂ B_{ε_i}(⟨A_i⟩) ⊂ s_i, i = 1, ⋯, r,

we then have

∩_{i=1}^r B_{δ_i}(⟨A_i⟩) ∈ T_d.

That is, V ∈ T_d. Hence T ⊂ T_d. We conclude that T_d = T. ✷

Since a metric space is regular and normal, as a corollary of Theorem 89 we have the following result.

Corollary 91 The topological space (Σ_µ, T), defined in Definition 62, is both regular and normal.

Note that

T_4 ⇒ T_3 ⇒ T_2.

Finally, we show some properties of Σ_µ.

Proposition 92 Σ_µ is convex. Hence it is arcwise connected.

Proof. Assume ⟨A⟩, ⟨B⟩ ∈ Σ_µ. Then it is clear that

λ⟨A⟩ ± (1 − λ)⟨B⟩ = ⟨λA ± (1 − λ)B⟩ ∈ Σ_µ, λ ∈ [0, 1].

So Σ_µ is convex. Letting λ go from 1 to 0, we obtain a path connecting ⟨A⟩ and ⟨B⟩. ✷

Proposition 93 Σ_µ and Σ_{1/µ} are isometric spaces.

Proof. Consider the transpose ⟨A⟩ ↦ ⟨A^T⟩. Then it is obvious that

d(⟨A⟩, ⟨B⟩) = d(⟨A^T⟩, ⟨B^T⟩).

Hence the transpose is an isometry. Moreover, its inverse is itself. ✷

3.6 Subspaces of Σ_µ

Consider the k-upper bounded subspace Σ_µ^{[·,k]}. We have:

Proposition 94 Σ_µ^{[·,k]} is a Hilbert space.

Proof. Since Σ_µ^{[·,k]} is a finite dimensional vector space and any finite dimensional inner product space is a Hilbert space [14], the conclusion follows. ✷

Proposition 95 ([14]) Let E be an inner product space and {0} ≠ F ⊂ E a Hilbert subspace.

(1) For each x ∈ E there exists a unique y := P_F(x) ∈ F, called the projection of x on F, such that

‖x − y‖ = min_{z∈F} ‖x − z‖.   (88)

(2)

F^⊥ := P_F^{−1}(0)   (89)

is the subspace orthogonal to F.
(3)

E = F ⊕ F^⊥,   (90)

where ⊕ stands for the orthogonal sum.

Using the above proposition, we consider the projection P_F : Σ_µ^β → Σ_µ^{[·,α]}. Let ⟨A⟩ ∈ Σ_µ^β and ⟨X⟩ ∈ Σ_µ^α, t = α ∨ β. Then the norm of ⟨A⟩ ⊢ ⟨X⟩ is

‖⟨A⟩ ⊢ ⟨X⟩‖ = (1/√t) ‖A ⊗ I_{t/β} − X ⊗ I_{t/α}‖_F.   (91)

Set p = µ_y, q = µ_x, and k := t/α. We split A ⊗ I_{t/β} into αp × αq blocks of dimension k × k,

A ⊗ I_{t/β} = (A^{i,j}), A^{i,j} ∈ M_{k×k}, i = 1, ⋯, pα; j = 1, ⋯, qα,

and set

C := argmin_{X ∈ Σ_µ^α} ‖A ⊗ I_{t/β} − X ⊗ I_{t/α}‖.   (92)

Then it is easy to calculate that

c_{i,j} = (1/k) trace(A^{i,j}), i = 1, ⋯, pα; j = 1, ⋯, qα.   (93)

We conclude:

Proposition 96 Let ⟨A⟩ ∈ Σ_µ and P_F : Σ_µ → Σ_µ^{[·,α]}; precisely speaking, ⟨A⟩ ∈ Σ_µ^β. Using the above notations, the projection of ⟨A⟩ is

P_F(⟨A⟩) = ⟨C⟩,   (94)

where C is defined in (93).

We give an example to depict this.

Example 97 Given a numerical A ∈ M_{3×6}, i.e., ⟨A⟩ ∈ Σ_{1/2}^3, we consider the projection of ⟨A⟩ onto Σ_{1/2}^{[·,2]}. Denote t = 2 ∨ 3 = 6. Using formulas (93)–(94), P_F(⟨A⟩) = ⟨C⟩ ∈ Σ_{1/2}^2, where the entries of the 2 × 4 matrix C are the normalized traces (1/3) trace(A^{i,j}) of the 3 × 3 blocks A^{i,j} of A ⊗ I_2. Then we have

E = ⟨A⟩ ⊢ P_F(⟨A⟩),

and it is easy to verify that E and P_F(⟨A⟩) are mutually orthogonal.

We also have Σ_µ^{[k,·]} and Σ_µ^{[α,β]} (where α | β) as metric subspaces of Σ_µ.

Finally, we would like to point out that, since Σ_µ is an infinite dimensional vector space, it is possible that Σ_µ is isometric to a proper subspace of itself. For instance, consider the following example.

Example 98 Consider the mapping ϕ : Σ_µ → Σ_µ^{[k,·]} defined by ⟨A⟩ ↦ ⟨A ⊗ I_k⟩. It is clear that this mapping satisfies

‖⟨A⟩ ⊢ ⟨B⟩‖ = ‖ϕ(⟨A⟩) ⊢ ϕ(⟨B⟩)‖, ⟨A⟩, ⟨B⟩ ∈ Σ_µ.

That is, Σ_µ can be isometrically embedded into its proper subspace.

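The block-trace formula (93) for the projection is straightforward to implement. A sketch (the helper name `project` is ours; numpy-based) that also checks the identity pr_k ∘ bd_k = id of (76):

```python
import numpy as np

def project(A, k):
    """pr_k of Definition 76 / formula (93): each entry of the projection is the
    normalized trace of the corresponding k x k block of A."""
    m, n = A.shape
    C = np.empty((m // k, n // k))
    for i in range(m // k):
        for j in range(n // k):
            C[i, j] = np.trace(A[i * k:(i + 1) * k, j * k:(j + 1) * k]) / k
    return C

A = np.array([[1., 2.]])                                   # root element in M_{1/2}^1
assert np.allclose(project(np.kron(A, np.eye(3)), 3), A)   # pr_k . bd_k = id, cf. (76)
M = np.arange(8.).reshape(2, 4)
assert np.allclose(project(M, 2), [[2.5, 4.5]])            # block traces 5/2 and 9/2
```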
V 4 Differential Structure on M-equivalence x U 2 ∈ Space

4.1 Bundled Manifold Fig. 5. Multiple Coordinate Charts

Unlike the conventional manifolds, which have fixed di- To depict the bundle coordinate chart, we refer to Fig. 5, mensions, this section explores a new kinds of manifolds, where two bundled coordinate charts U and V are de- called the bundled manifold. Intuitively speaking, it is a scribed. Note that U = U U and V = V V , where 1 ∩ 2 1 ∩ 2 fiber bundle, which has fibers belonging to manifolds of U1,U2, V1, V2 are simple coordinate charts. Also, U different dimensions. To begin with, the following defi- and V are coordinate neighborhoods of x and y respec- nition is proposed, which is a mimic to the definition of tively. Next, we assume that dim(U2) = dim(V1) = s, an n-dimensional manifold [1]. U V = , and U V = . U and V are Cr compa- 2 ∩ 1 6 ∅ 1 ∩ 2 ∅ rable, if and only if the two mappings Definition 99 Let M, be a topological space. { T} 1 ψ φ− : φ(U2 V1) ψ(U2 V1) and (1) An open set U = is said to be a simple coordinate ◦ ∩ → ∩ 6 ∅ s 1 chart, if there is an open set Θ R and a home- φ ψ− : ψ(U2 V1) φ(U2 V1) ⊂ ◦ ∩ → ∩ omorphism φ : U Θ Rs. The integer s is said → ⊂ to be the dimension of U. are Cr. As a convention, we assume dim(U ) = dim(U ), 1 6 2 (2) An open set U = is said to be a bundled coordinate dim(V ) = dim(V ). 6 ∅ 1 6 2 chart, if there exist finite simple coordinate charts si r Ui with homeomorphisms φi : Ui Θi R , Definition 101 A topological space M is a bundled C → ⊂ i =1, , k, k< is set U depending, such that (or C , or analytic, denoted by Cω) manifold, if the ··· ∞ ∞ following conditions are satisfied. k U = U . i (1) M is second countable and Hausdorff. i=1 \ (2) There exists an open cover of M, described as (3) Let U, V be two bundled coordinate charts. U = k1 k2 r Ui, V = Vj . U and V are said to be C = U λ Λ , ∩i=1 ∩j=1 C { λ | ∈ }

where each U_λ is a bundled coordinate chart. Moreover, any two bundled coordinate charts in C are C^r comparable.

(3) If a bundled coordinate chart V is comparable with U_λ, ∀λ ∈ Λ, then V ∈ C.

It is obvious that the topological structure of Σ_µ with natural (R^{iµ_y × iµ_x}) coordinates on each cross section (or leaf) meets the above requirements for a bundled manifold. Hence, we have the following result.

Theorem 102 Σ_µ is a bundled analytic manifold.

Proof. Condition 1 has been proved in Theorem 64. For condition 2, set p = µ_y, q = µ_x and

[Fig. 6. Lattice-related Coordinates]

O_k := M^k_µ, k = 1, 2, ⋯.

Choosing any finite open subsets o_{i_s} ⊂ O_{i_s}, s = 1, 2, ⋯, t, t < ∞, and constructing the corresponding s(o_{i_1}), ⋯, s(o_{i_t}), set U_I := s(o_{i_1}) ∩ ⋯ ∩ s(o_{i_t}), where I = {i_1, i_2, ⋯, i_t}. Define

W := { U_I | I is a finite subset of N }.

Then W is an open cover of M. The identity mappings from s(o_i) ≃ M^i_µ → R^{ip×iq} make any two U_I and U_J C^ω comparable. As for condition 3, just add into W all bundled coordinate charts which are comparable with W; then the condition is satisfied. □

Next, we consider the lattice-related coordinates on Σ_µ. Consider Σ_µ and assume p = µ_y and q = µ_x. Then Σ_µ has leafs

Σ_µ = { M^i_µ | i = 1, 2, ⋯ },

where M^i_µ = M_{ip×iq}.

Consider an element ⟨x⟩ ∈ Σ_µ. Then there exists a unique irreducible x_1 ∈ ⟨x⟩ such that ⟨x⟩ = { x_j = x_1 ⊗ I_j | j = 1, 2, ⋯ }. Now assume x_1 ∈ M^s_µ. As defined above, M^s_µ is the root leaf of ⟨x⟩.

It is obvious that ⟨x⟩ has different coordinate representations on different leafs. But because of the subspace lattice structure, they must be consistent. Fig. 6 shows the lattice-related subspaces. Any geometric object defined on its root leaf must be consistent with its representations on all embedded spaces and projected spaces.

As shown in Fig. 6, the following subspaces are related:

• Class 1 (Embedded Elements): Starting from x_1 ∈ M^s_µ, we have

x_1 ≺ x_2 ≺ x_3 ≺ ⋯.

• Class 2 (Projected Elements): Let x^(t) ∈ M^t_µ, where t | s. Then

x^(t) ≺ x_1.

Particularly, x^(1) ∈ M^1_µ satisfies x^(1) ≺ x_1.

• Class 3 (Embedded Elements from Projected Elements): Starting from any x^(t), t | s, we have

x^(t) ≺ x^(2t) ≺ x^(3t) ≺ ⋯.

Remark 103 (1) Classes 1–3 are the sets of coordinates which are related to a given irreducible element x_1.
(2) Elements in Class 1 are particularly important. Say, we may firstly define a root element on M^s_µ, such as A_1 ∈ M^s_µ. Then we use it to get an equivalence class, such as ⟨A⟩, and use the elements in this class to perform certain calculations, such as the STP. All the elements in the equivalence class ⟨A⟩ are of Class 1.
(3) The elements in subspaces and their equivalence classes are less important. Sometimes we may concern ourselves only with the elements of Class 1, say for the STP etc.
(4) The subspace elements obtained by the projection mapping may not be "uniformed" with the objects obtained from the real subspaces of the original space. More discussion will be seen in the sequel.

(5) Because of the above argument, sometimes we may consider only the equivalence classes which have their root elements defined on their root leaf. Therefore, the objects may only be defined on the multiples of the root leaf (root space).

4.2 C^r Functions on Σ_µ

Definition 104 Let M be a bundled manifold. f : M → R is called a C^r function, if for each simple coordinate chart U ⊂ M^s_µ, f|_U := f_s is C^r. The set of C^r functions on M is denoted by C^r(M).

Assume f ∈ C^r(Σ_µ), A, B ∈ M_µ and A ∼ B. That f is well defined on Σ_µ means it is defined on different leafs consistently, and hence on the leafs corresponding to A and B we have f(A) = f(B). To this end, f can be constructed as follows:

Definition 105 Assume f is firstly defined on the root leaf M^s_µ as f_s(x). Then we extend it to the other leafs as:

• Step 1. Let Q = { t ∈ N | t | s }. Then

f_t(y) := f_s(x = bd_k(y)),  (95)

where k = s/t ∈ N.
• Step 2. Assume gcd(ℓ, s) = t. If ℓ = t, f_ℓ = f_t has already been defined in Step 1. So we assume ℓ = kt. Then

f_ℓ(y) := f_t(x = pr_k(y)).  (96)

Note that in Step 2, t = s is allowed.

Then it is easy to verify the following:

Proposition 106 The function f defined in Definition 105 is consistent with the equivalence ∼. Hence it is well defined on Σ_µ.

Example 107 Consider Σ_2, and assume f is firstly defined on its root leaf M^2_2 as

f_2( [a_{11} a_{12}; a_{21} a_{22}; a_{31} a_{32}; a_{41} a_{42}] ) := a_{11}a_{22} − a_{32}a_{41}.

Then we can determine the other expressions of f as follows:

• Consider f_1:

f_1( (a_1, a_2)^T ) = f_2( (a_1, a_2)^T ⊗ I_2 ) = a_1².

• Consider f_3. Let A = (a_{i,j}) ∈ M^3_2. Then

f_3(A) = f_1(pr_3(A)) = f_1( [ (a_{11}+a_{22}+a_{33})/3 ; (a_{41}+a_{52}+a_{63})/3 ] ) = (1/9)(a_{11}+a_{22}+a_{33})².

Similarly, for any n = 2k − 1 we have

f_n(A) = (1/n²)(a_{11} + a_{22} + ⋯ + a_{nn})².

• Consider f_4. Let A = (a_{i,j}) ∈ M^4_2. Then

f_4(A) = f_2(pr_2(A))
= f_2( [ (a_{11}+a_{22})/2, (a_{13}+a_{24})/2 ; (a_{31}+a_{42})/2, (a_{33}+a_{44})/2 ; (a_{51}+a_{62})/2, (a_{53}+a_{64})/2 ; (a_{71}+a_{82})/2, (a_{73}+a_{84})/2 ] )
= (1/4)[ (a_{11}+a_{22})(a_{33}+a_{44}) − (a_{53}+a_{64})(a_{71}+a_{82}) ].

Similarly, for n = 2k (k ≥ 2) we have

f_n(A) = (1/k²)[ (a_{11}+⋯+a_{kk})(a_{k+1,k+1}+⋯+a_{2k,2k}) − (a_{2k+1,k+1}+⋯+a_{3k,2k})(a_{3k+1,1}+⋯+a_{4k,k}) ].

Remark 108 For a smooth function f defined firstly on M^s_µ, its extensions to both M^{[·,s]}_µ and M^{[s,·]}_µ are consistently defined. Hence, f is completely well posed on Σ_µ.

4.3 Generalized Inner Products

We define the generalized Frobenius inner product as follows.
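The consistency claimed in Proposition 106 can be checked numerically on Example 107. The following sketch (an illustration only; the helper names f2 and f4 are ours, not the paper's) verifies that the extended expression f_4, evaluated on the representative bd_2(A) = A ⊗ I_2, returns the root-leaf value f_2(A):

```python
import numpy as np

def f2(A):
    # root-leaf definition of Example 107 on M^2_2 (4x2): f2(A) = a11*a22 - a32*a41
    return A[0, 0]*A[1, 1] - A[2, 1]*A[3, 0]

def f4(A):
    # extension to the leaf M^4_2 (8x4), i.e. f2 composed with pr_2
    return 0.25*((A[0, 0] + A[1, 1])*(A[2, 2] + A[3, 3])
                 - (A[4, 2] + A[5, 3])*(A[6, 0] + A[7, 1]))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
A4 = np.kron(A, np.eye(2))          # the representative bd_2(A) = A ⊗ I_2
print(abs(f4(A4) - f2(A)) < 1e-12)  # the two leaf expressions agree on the class
```

Since the diagonal of each 2×2 block of A ⊗ I_2 repeats the corresponding entry of A twice, the averaged sums in f_4 collapse back to the entries of A, which is exactly the content of Proposition 106 for this example.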

Definition 109 Given A ∈ M_{m×n} and B ∈ M_{p×q}.

Case 1 (Special Case): Assume p = rm and q = sn. Split B into equal blocks as

B = [ B_{1,1} B_{1,2} ⋯ B_{1,s} ; B_{2,1} B_{2,2} ⋯ B_{2,s} ; ⋯ ; B_{r,1} B_{r,2} ⋯ B_{r,s} ],

where B_{i,j} ∈ M_{m×n}, i = 1, ⋯, r; j = 1, ⋯, s. Then the generalized Frobenius inner product of A and B is defined as

(A|B)_F := [ (A|B_{1,1})_F (A|B_{1,2})_F ⋯ (A|B_{1,s})_F ; ⋯ ; (A|B_{r,1})_F (A|B_{r,2})_F ⋯ (A|B_{r,s})_F ].  (97)

Note that here (A|B_{i,j})_F is the standard Frobenius inner product defined in (71).

Case 2 (General Case): Assume A ∈ M_{m×n} and B ∈ M_{p×q}, and let the greatest common divisor of m, p be α = m ∧ p, and the greatest common divisor of n, q be β = n ∧ q. Denote ξ = m/α, η = n/β, r = p/α and s = q/β. Then we split A into ξ × η blocks as

A = [ A_{1,1} A_{1,2} ⋯ A_{1,η} ; A_{2,1} A_{2,2} ⋯ A_{2,η} ; ⋯ ; A_{ξ,1} A_{ξ,2} ⋯ A_{ξ,η} ],

where A_{i,j} ∈ M_{α×β}, i = 1, ⋯, ξ; j = 1, ⋯, η. Then the generalized Frobenius inner product of A and B is defined as

(A|B)_F := [ (A_{1,1}|B)_F (A_{1,2}|B)_F ⋯ (A_{1,η}|B)_F ; ⋯ ; (A_{ξ,1}|B)_F (A_{ξ,2}|B)_F ⋯ (A_{ξ,η}|B)_F ].  (98)

Note that here (A_{i,j}|B)_F is the (Case 1) generalized Frobenius inner product defined in (97).

Example 110 Let

A = [ 1 1 1 0 ; −1 2 0 1 ] ∈ M_{2×4}

and

B = [ 1 0 ; 1 2 ; −1 0 ; −1 −1 ] ∈ M_{4×2}.

Note that m = 2, n = 4, p = 4, q = 2, α = gcd(m, p) = 2, β = gcd(n, q) = 2. Then we split A and B as follows:

A = [ A_{1,1} A_{1,2} ], B = [ B_{1,1} ; B_{2,1} ],

where A_{1,j}, B_{k,1} ∈ M_{2×2}, j = 1, 2; k = 1, 2.

Finally, we have

(A|B)_F = [ (A_{1,1}|B_{1,1}) (A_{1,2}|B_{1,1}) ; (A_{1,1}|B_{2,1}) (A_{1,2}|B_{2,1}) ] = [ 4 3 ; −2 −2 ].

Definition 111 Assume A ∈ M^α_µ and B ∈ M^β_λ. Let µ_y ∧ λ_y = s, µ_x ∧ λ_x = t, and µ_y = m, µ_x = n, λ_y = p, λ_x = q. Since s, t are co-prime, denote σ = s/t; then σ_y = s and σ_x = t.
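The computation of Example 110 can be reproduced with a short script. The sketch below (our own helper gfip, assuming the standard Frobenius pairing for the innermost blocks, as stated after (97)) implements the two-level block splitting of Definition 109:

```python
import numpy as np
from math import gcd

def gfip(A, B):
    # generalized Frobenius inner product of Definition 109 (Case 2 reduced
    # blockwise to Case 1); the inner pairing is the standard Frobenius product
    m, n = A.shape; p, q = B.shape
    a, b = gcd(m, p), gcd(n, q)                  # alpha, beta
    xi, eta, r, s = m//a, n//b, p//a, q//b
    out = np.empty((xi*r, eta*s))
    for i in range(xi):
        for j in range(eta):
            Aij = A[i*a:(i+1)*a, j*b:(j+1)*b]    # alpha x beta block of A
            for k in range(r):
                for l in range(s):
                    Bkl = B[k*a:(k+1)*a, l*b:(l+1)*b]
                    out[i*r+k, j*s+l] = np.sum(Aij*Bkl)
    return out

A = np.array([[1., 1, 1, 0], [-1, 2, 0, 1]])
B = np.array([[1., 0], [1, 2], [-1, 0], [-1, -1]])
print(gfip(A, B))   # [[ 4.  3.] [-2. -2.]], as in Example 110
```

Running it on the matrices of Example 110 returns the 2×2 matrix [4 3; −2 −2] stated above.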

Split A as

A = [ A_{1,1} A_{1,2} ⋯ A_{1,n} ; A_{2,1} A_{2,2} ⋯ A_{2,n} ; ⋯ ; A_{m,1} A_{m,2} ⋯ A_{m,n} ],

where A_{i,j} ∈ M^α_σ; and split B as

B = [ B_{1,1} B_{1,2} ⋯ B_{1,q} ; B_{2,1} B_{2,2} ⋯ B_{2,q} ; ⋯ ; B_{p,1} B_{p,2} ⋯ B_{p,q} ],

where B_{i,j} ∈ M^β_σ. Then the generalized weighted inner product is defined as

(A|B)_W := [ (A_{1,1}|B_{1,1})_W ⋯ (A_{1,1}|B_{1,q})_W ; (A_{2,1}|B_{2,1})_W ⋯ (A_{2,1}|B_{2,q})_W ; ⋮ ; (A_{m,1}|B_{p,1})_W ⋯ (A_{m,1}|B_{p,q})_W ; (A_{1,n}|B_{1,1})_W ⋯ (A_{1,n}|B_{1,q})_W ; (A_{2,n}|B_{2,1})_W ⋯ (A_{2,n}|B_{2,q})_W ; ⋮ ; (A_{m,n}|B_{p,1})_W ⋯ (A_{m,n}|B_{p,q})_W ],  (99)

where the (A_{i,j}|B_{r,s})_W are defined in (78).

Definition 112 Assume ⟨A⟩ ∈ Σ_µ and ⟨B⟩ ∈ Σ_λ. Then the generalized inner product of ⟨A⟩ and ⟨B⟩, denoted by (⟨A⟩|⟨B⟩), is defined as

(⟨A⟩|⟨B⟩) := ⟨(A|B)_W⟩.  (100)

Of course, we need to prove that (100) is independent of the choice of representatives A and B. This is verified by a straightforward computation.

Next, we would like to define another "inner product", called the δ-inner product, where δ ∈ Q_+. First we introduce a new notation:

Definition 113 Let µ, δ ∈ Q_+. µ is said to be superior to δ, denoted by

µ ≫ δ,

if δ_y | µ_y and δ_x | µ_x.

The δ-inner product is a mapping (·|·)_δ : ⋃_{µ≫δ} Σ_µ × ⋃_{λ≫δ} Σ_λ → Σ_δ.

Definition 114 Assume A ∈ M^α_µ, B ∈ M^β_λ and µ ≫ δ, λ ≫ δ. Denote µ_y/δ_y = ξ, µ_x/δ_x = η, λ_y/δ_y = ζ, and λ_x/δ_x = ℓ. Then the δ-inner product of A and B is defined as follows. Split

A = [ A_{1,1} A_{1,2} ⋯ A_{1,η} ; A_{2,1} A_{2,2} ⋯ A_{2,η} ; ⋯ ; A_{ξ,1} A_{ξ,2} ⋯ A_{ξ,η} ],

where A_{i,j} ∈ M^α_δ, and

B = [ B_{1,1} B_{1,2} ⋯ B_{1,ℓ} ; B_{2,1} B_{2,2} ⋯ B_{2,ℓ} ; ⋯ ; B_{ζ,1} B_{ζ,2} ⋯ B_{ζ,ℓ} ],

where B_{i,j} ∈ M^β_δ. Then the δ-inner product of A and B is defined as

(A|B)_δ := [ C_{1,1} C_{1,2} ⋯ C_{1,η} ; C_{2,1} C_{2,2} ⋯ C_{2,η} ; ⋯ ; C_{ξ,1} C_{ξ,2} ⋯ C_{ξ,η} ],  (101)

where

C_{i,j} := [ (A_{i,j}|B_{1,1})_W (A_{i,j}|B_{1,2})_W ⋯ (A_{i,j}|B_{1,ℓ})_W ; ⋯ ; (A_{i,j}|B_{ζ,1})_W (A_{i,j}|B_{ζ,2})_W ⋯ (A_{i,j}|B_{ζ,ℓ})_W ].  (102)

Definition 115 Assume ⟨A⟩ ∈ Σ_µ and ⟨B⟩ ∈ Σ_λ, where µ ≫ δ and λ ≫ δ. Then the δ-inner product of ⟨A⟩ and ⟨B⟩ is defined as

(⟨A⟩|⟨B⟩)_δ := ⟨(A|B)_δ⟩, A ∈ ⟨A⟩, B ∈ ⟨B⟩.  (103)

Remark 116 (1) It is easy to verify that (103) is independent of the choice of A and B. Hence the δ-inner product is well defined.
(2) Definition 111 (or Definition 112) is a special case of Definition 114 (correspondingly, Definition 115).
(3) Definition 109 cannot be extended to the equivalence space, because it depends on the choice of representatives.
(4) Unlike the generalized inner product defined in Definition 111 (as well as Definition 112), the δ-inner product is defined on a subset of M (or Σ).

Using the δ-inner product, we have the following.

Proposition 117 Assume ϕ : Σ_µ → Σ_λ is a linear mapping. Then there exists a matrix Λ ∈ M_{rµ_yλ_y × rµ_xλ_x}, called the structure matrix of ϕ, such that

ϕ(⟨A⟩) = (A|Λ)_µ.  (104)

4.4 Vector Fields

Definition 118 Let M be a bundled manifold and T(M) the tangent space of M. V : M → T(M) is called a C^r vector field, if for each simple coordinate chart U, V|_U is C^r. The set of C^r vector fields on M is denoted by V^r(M).

We express a vector field in matrix form. That is, let X ∈ V^r(M_{m×n}). Then

X = Σ_{i=1}^m Σ_{j=1}^n f_{i,j}(x) ∂/∂x_{i,j} := [f_{i,j}(x)] ∈ M_{m×n}.

Similar to smooth functions, the vector fields on M_µ can be defined as follows: ¹

Definition 119 Assume ⟨X⟩ is firstly defined on T(M^s_µ) as X_s(x), i.e., M^s_µ is the root leaf of ⟨X⟩. Then we extend it to the other leafs as:

• Step 1. Let Q = { t ∈ N | t | s }. Then

X_t(y) := (pr_k)_*(X_s)(x = bd_k(y)),  (105)

where k = s/t ∈ N.
• Step 2. Assume ℓ ∧ s = t. If ℓ = t, X_ℓ = X_t has already been defined in Step 1. So we assume ℓ = kt. Then

X_ℓ(y) := (bd_k)_*(X_t)(x = pr_k(y)) ⊗ I_k.  (106)

Next, we consider the computation of the related expressions of a vector field which is originally defined on its root leaf.

• To calculate (105) we first set

x = y ⊗ I_k  (107)

to get X_s(x(y)) := X_s(y). Then we split X_s(y) into tp × tq blocks as X_s = [X_s^{i,j}], where each X_s^{i,j} ∈ M_{k×k}. Then X_t = [X^{i,j}] ∈ T_x(M_{tp×tq}), where

X^{i,j} = (1/k) Tr( X_s^{i,j} ).  (108)

• To calculate (106) we split y into tp × tq blocks as y = [y^{i,j}], where each y^{i,j} ∈ M_{k×k}. Then X_t(y) is obtained by replacing x by

x = [ (1/k) Tr( y^{i,j} ) ].  (109)

It follows that

X_ℓ(y) = X_t(y) ⊗ I_k.  (110)

Then it is easy to verify the following:

Proposition 120 The vector field X defined in Definition 119 is consistent with the equivalence ∼ on M^{[s,·]}_µ. Hence the equivalence class ⟨X⟩ is well defined on T(Σ^{[s,·]}_µ).

¹ Let M and N be two manifolds, and ϕ : M → N a smooth mapping, x ∈ M, y ∈ N, and ϕ(x) = y. Then
· ϕ_* : T_x(M) → T_y(N), the pushforward, satisfies ϕ_*(X)h(y) = L_X(h∘ϕ)(x), ∀h ∈ C^∞(N), ∀X ∈ T_x(M), where L_X is the Lie derivative with respect to X;
· ϕ^* : T_y^*(N) → T_x^*(M), the pullback, satisfies ϕ^*(ω)(Y) = ω(ϕ_*(Y)|_x), ∀Y ∈ T_x(M), ∀ω ∈ T_y^*(N).
We refer readers to [1] for the concepts, and to [42] for computations.
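The two operators appearing in (105)–(110), the embedding bd_k(A) = A ⊗ I_k and the projection pr_k (normalized trace of each k×k block, as used in Example 122 below and in Example 107), can be sketched as follows; the helper names bd and pr are ours:

```python
import numpy as np

def bd(A, k):
    # embedding bd_k : M_{p×q} → M_{kp×kq}, A ↦ A ⊗ I_k
    return np.kron(A, np.eye(k))

def pr(A, k):
    # projection pr_k : M_{kp×kq} → M_{p×q}: normalized trace of each k×k block
    kp, kq = A.shape
    return np.array([[np.trace(A[i*k:(i+1)*k, j*k:(j+1)*k])/k
                      for j in range(kq//k)] for i in range(kp//k)])

A = np.arange(6.).reshape(3, 2)
print(np.allclose(pr(bd(A, 4), 4), A))   # pr_k ∘ bd_k = identity on each leaf
```

The printed check reflects that projection is a left inverse of embedding, which is what makes the Step 1 / Step 2 extensions above consistent on a class.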

Remark 121 In fact, Proposition 120 only claims that the representations on the embedded super-spaces are consistent, which is obviously weaker than Proposition 106. Please refer to Remark 103 and the later Remark 126 for the extension of ⟨X⟩ to the projected subspaces.

Example 122 Consider Σ_{1/2}. Assume ⟨X⟩ is firstly defined on T(M^2_{1/2}) as

X_2(x) = F(x) = [ F^1(x) F^2(x) ],

where x = (x_{i,j}) ∈ M^2_{1/2} and

F^1(x) = [ x_{1,1} 0 ; 0 x_{1,3} ];  F^2(x) = [ x_{2,2} 0 ; x_{1,1} 0 ].

Then we consider the expression of X on the other cross sections:

• Consider X_1 ∈ T(M^1_{1/2}). Set

X_1(y) = [ f_1(y) f_2(y) ],

where y = (y_1, y_2) ∈ M^1_{1/2}. According to (105),

f_1(y) = (1/2)( F^{1,1}_{1,1}(bd_2(y)) + F^{1,1}_{2,2}(bd_2(y)) ) = (y_1 + y_2)/2;

and

f_2(y) = (1/2)( F^{1,2}_{1,1}(bd_2(y)) + F^{1,2}_{2,2}(bd_2(y)) ) = y_1/2.

• Consider X_3 ∈ T(M^3_{1/2}). Set

X_3(z) = (g_{i,j}(z)) ∈ M^3_{1/2},

where z = (z_{i,j}) ∈ M^3_{1/2}. Consider the projection pr_3 : M^3_{1/2} → M^1_{1/2}:

pr_3(z) = [ (z_{1,1}+z_{2,2}+z_{3,3})/3, (z_{1,4}+z_{2,5}+z_{3,6})/3 ].

According to (106),

X_3(z) = [ G_1(z), G_2(z) ] ⊗ I_3,

where

G_1(z) = f_1(pr_3(z)) = (1/6)( z_{1,1}+z_{2,2}+z_{3,3} + z_{1,4}+z_{2,5}+z_{3,6} ),
G_2(z) = f_2(pr_3(z)) = (1/6)( z_{1,1}+z_{2,2}+z_{3,3} ).

Similarly, for n = 2k − 1 we have

X_n(z) = [ G_1(z), G_2(z) ] ⊗ I_n ∈ T(M^n_{1/2}),

where

G_1(z) = (1/2n)( z_{1,1}+⋯+z_{n,n} + z_{1,n+1}+z_{2,n+2}+⋯+z_{n,2n} ),
G_2(z) = (1/2n)( z_{1,1}+z_{2,2}+⋯+z_{n,n} ).

• Consider X_4 ∈ M^4_{1/2}. Set

X_4(z) = (g_{i,j}(z)) ∈ T(M^4_{1/2}),

where z = (z_{i,j}) ∈ M^4_{1/2}. Consider the projection pr_2 : M^4_{1/2} → M^2_{1/2}:

pr_2(z) = [ (z_{1,1}+z_{2,2})/2 (z_{1,3}+z_{2,4})/2 (z_{1,5}+z_{2,6})/2 (z_{1,7}+z_{2,8})/2 ; (z_{3,1}+z_{4,2})/2 (z_{3,3}+z_{4,4})/2 (z_{3,5}+z_{4,6})/2 (z_{3,7}+z_{4,8})/2 ].

According to (106),

X_4(z) = [ G_{1,1}(z) G_{1,2}(z) G_{1,3}(z) G_{1,4}(z) ; G_{2,1}(z) G_{2,2}(z) G_{2,3}(z) G_{2,4}(z) ] ⊗ I_2,

where

G_{1,1}(z) = (z_{1,1}+z_{2,2})/2, G_{1,2}(z) = 0,
G_{1,3}(z) = (z_{3,3}+z_{4,4})/2, G_{1,4}(z) = 0,
G_{2,1}(z) = 0, G_{2,2}(z) = (z_{1,5}+z_{2,6})/2,
G_{2,3}(z) = (z_{1,1}+z_{2,2})/2, G_{2,4}(z) = 0.

Similarly, for n = 2k we have X_n ∈ T(M^n_{1/2}) as

X_n(z) = [ G_{1,1}(z) G_{1,2}(z) G_{1,3}(z) G_{1,4}(z) ; G_{2,1}(z) G_{2,2}(z) G_{2,3}(z) G_{2,4}(z) ] ⊗ I_k,

where

G_{1,1}(z) = (z_{1,1}+z_{2,2}+⋯+z_{k,k})/k, G_{1,2}(z) = 0,
G_{1,3}(z) = (z_{k+1,k+1}+z_{k+2,k+2}+⋯+z_{2k,2k})/k, G_{1,4}(z) = 0,
G_{2,1}(z) = 0, G_{2,2}(z) = (z_{1,2k+1}+z_{2,2k+2}+⋯+z_{k,3k})/k,
G_{2,3}(z) = (z_{1,1}+z_{2,2}+⋯+z_{k,k})/k, G_{2,4}(z) = 0.

4.5 Integral Curves

Definition 123 Let ⟨ξ⟩ ∈ V^r(Σ_µ) be a vector field. Then for each ⟨A⟩ ∈ Σ_µ there exists a curve ⟨X(t)⟩ such that ⟨X(0)⟩ = ⟨A⟩ and

⟨Ẋ(t)⟩ = ⟨ξ⟩(⟨X(t)⟩),  (111)

which is called the integral curve of ⟨ξ⟩, starting from ⟨A⟩.

In fact, the integral curve of ⟨ξ⟩ is a bundled integral curve. Assume ⟨A⟩ = ⟨A_0⟩, where A_0 ∈ M_{m×n} is irreducible. Here m, n may not be co-prime but m/n = µ. So we denote A_0 ∈ M^s_µ. Then on this root leaf we denote

ξ^s = ⟨ξ⟩ ∩ T(M_{m×n}),

and the integral curve of ξ^s, starting from A_0, is a standard one, which is the cross section of the bundled integral curve ⟨X⟩ passing through A_0. That is, it is the solution of

Ẋ_s(t) = ξ^s(X_s), X_s(0) = A_0.  (112)

We may denote the solution as

X_s(t) = Φ_t^{ξ^s}(A_0).  (113)

Next, we consider the other cross sections of the integral curve, which correspond to the cross sections of ⟨ξ⟩ respectively.

Recalling Definition 119, we can get the cross sections of the bundled integral curve on each leaf by using the cross sections of ⟨ξ⟩ on the corresponding leafs. The following result is then obvious:

Theorem 124 Assume s | τ. The corresponding cross section of the bundled integral curve is the integral curve, denoted by

X_τ(t) = Φ_t^{ξ^τ}(bd_k(A_0)), k = τ/s,

satisfying

Ẋ_τ(t) = ξ^τ(X_τ), X_τ(0) = bd_k(A_0),  (114)

where ξ^τ is defined by (106).

Note that a cross section here is a mapping from the base space to a leaf such that (63) holds.

Fig. 7 shows the integral curve and its projections on each leaf.

[Fig. 7. Integral Curve of Vector Field]

The following result comes from the construction directly.

Theorem 125 Assume A_α = ⟨A⟩ ∩ M^α_µ, A_β = ⟨A⟩ ∩ M^β_µ, ξ^α = ⟨ξ⟩ ∩ T(M^α_µ), ξ^β = ⟨ξ⟩ ∩ T(M^β_µ), and α = kβ, k ≥ 2. ⟨ξ⟩ is defined firstly on T(M^s_µ), where s | β. Then there exists a one-to-one correspondence between the two cross sections (or the corresponding integral curves). Precisely,

Φ_t^{ξ^α}(A_α) = bd_k( Φ_t^{ξ^β}(pr_k(A_α)) ),
Φ_t^{ξ^β}(A_β) = pr_k( Φ_t^{ξ^α}(bd_k(A_β)) ).  (115)
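Theorem 124 can be illustrated numerically. In the sketch below (an assumed toy case of ours, not from the paper), the root-leaf field on M^1_1 is ẋ = x², and its expression on the 2×2 leaf, following (106)/(110), is ξ²(Y) = (tr(Y)/2)² I_2; integrating both with the same Euler scheme from x_0 and from bd_2(x_0) gives cross sections that correspond:

```python
import numpy as np

# root-leaf vector field on M^1_1: xdot = x^2
f1 = lambda x: x**2
# its expression on the 2x2 leaf, per (106)/(110): xi^2(Y) = f1(tr(Y)/2) * I_2
f2 = lambda Y: f1(np.trace(Y)/2)*np.eye(2)

def flow(f, X0, t, steps=20000):
    X = X0
    for _ in range(steps):       # explicit Euler integration of Xdot = f(X)
        X = X + (t/steps)*f(X)
    return X

x0, t = 0.5, 1.0
x1 = flow(f1, x0, t)                    # integral curve on the root leaf
X2 = flow(f2, x0*np.eye(2), t)          # integral curve started from bd_2(x0)
print(np.allclose(X2, x1*np.eye(2)))    # cross sections correspond (Theorem 124)
```

The 2×2 trajectory stays a scalar multiple of I_2, so it is exactly the embedded copy bd_2 of the root-leaf trajectory, as (115) asserts.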

Particularly, assume the vector field is a linear vector field and X_0 ∈ M^1_δ. Then we have

Φ^ξ_{i,j} = ( X_{i,j} | X_0 ), i = 1, ⋯, m; j = 1, ⋯, n.  (116)

Moreover, on M_{sm×sn} the cross section of the integral curve can be expressed by modifying (116) as

Φ^ξ_{i,j} = Φ^{ξ^1}_{i,j} ⊗ I_s, i = 1, ⋯, m; j = 1, ⋯, n.  (117)

Remark 126
• Note that if ⟨ξ⟩ is firstly defined on T(M^s_µ) and t | s, then the projection ξ^(t) is obtained as follows: (i) set y = bd_k(x); (ii) project ξ^s(y = bd_k(x)) onto the tangent space of the subspace T(M^t_µ ⊗ I_k). According to (ii), ξ^(t)(A) does not correspond to ξ^s(A ⊗ I_k). Hence, the relationships demonstrated in Theorems 124 and 125 are not available for the integral curves of ξ^(t) and ξ^s respectively. Because of this argument, if ⟨ξ⟩ is firstly defined on T(M^s_µ), where M^s_µ is the root leaf for ⟨ξ⟩, then we are mainly concerned with the integral curves of ξ^τ on T(M^τ_µ), where s | τ.
• The projections of ⟨ξ⟩ = ⟨ξ^s⟩ onto the subspaces ξ^(t) are useful in some other problems. For instance, assume ⟨ξ⟩ is a constant vector field. Using the notations of Theorem 125, we set

D := Span{ D^{I,J} | I = 1, ⋯, αp; J = 1, ⋯, αq },

where D^{I,J} is defined in (69). Then the projection of ξ on D, denoted by ξ_D, is well defined. Moreover, (115) becomes

Φ_t^{ξ^α_D}(A_α) = bd_k( Φ_t^{ξ^β_D}(pr_k(A_α)) ),
Φ_t^{ξ^β_D}(A_β) = pr_k( Φ_t^{ξ^α_D}(bd_k(A_β)) ).  (118)

Example 127 Recall Example 122. Since it is a linear vector field, it is easy to calculate that the cross section on M_{2×4} can be expressed as in (116), where X_0 ∈ M_{2×4} and

X_{1,1} = [ e^t 0 0 0 ; 0 0 0 0 ];  X_{1,2} = [ 0 1 0 0 ; 0 0 0 0 ];
X_{1,3} = [ 0 0 (e^t+e^{−t})/2 0 ; 0 (e^t−e^{−t})/2 0 0 ];  X_{1,4} = [ 0 0 0 1 ; 0 0 0 0 ];
X_{2,1} = [ 0 0 0 0 ; 1 0 0 0 ];  X_{2,2} = [ 0 0 (e^t−e^{−t})/2 0 ; 0 (e^t+e^{−t})/2 0 0 ];
X_{2,3} = [ e^t−1 0 0 0 ; 0 0 1 0 ];  X_{2,4} = [ 0 0 0 0 ; 0 0 0 1 ].

An integral manifold of an involutive distribution on T(Σ_µ) can be defined and calculated in a similar way.

4.6 Forms

Definition 128 Let M be a bundled manifold. ω : M → T*(M) is called a C^r co-vector field (or one-form), if for each simple coordinate chart U, ω|_U is a C^r co-vector field. The set of C^r co-vector fields on M is denoted by V*^r(M).

We express a co-vector field in matrix form. That is, let ω ∈ V*^r(M_{m×n}). Then

ω = Σ_{i=1}^m Σ_{j=1}^n ω_{i,j}(x) dx_{i,j} := [ω_{i,j}(x)] ∈ M_{m×n}.

Similar to the construction of vector fields, the co-vector fields on Σ_µ can be established as follows:

Definition 129 Assume ⟨ω⟩ is firstly defined on T*(M^s_µ) as ω_s(x), where M^s_µ is the root leaf of ⟨ω⟩. Then we extend it to the other leafs as:

• Step 1. Let Q = { t ∈ N | t | s }. Then

ω_t(y) := k (bd_k)^*(ω_s)(x = bd_k(y)),  (119)

where k = s/t ∈ N.

• Step 2. Assume ℓ ∧ s = t. If ℓ = t, ω_ℓ = ω_t has already been defined in Step 1. So we assume ℓ = kt, k ≥ 2. Then

ω_ℓ(y) := (1/k) (pr_k)^*(ω_t)(x = pr_k(y)) ⊗ I_k.  (120)

Similar to the calculation of vector fields, to calculate (119) we first set

x = y ⊗ I_k  (121)

to get ω_s(x(y)) := ω_s(y). Then we split ω_s(y) into tp × tq blocks as ω_s = [ω_s^{i,j}], where each ω_s^{i,j} ∈ M_{k×k}. Then ω_t = [ω^{i,j}] ∈ T_x^*(M_{tp×tq}), where the entries are

ω^{i,j} = Tr( ω_s^{i,j} ), i = 1, ⋯, tp; j = 1, ⋯, tq.  (122)

To calculate (120) we split y into tp × tq blocks as y = [y^{i,j}], where each y^{i,j} ∈ M_{k×k}. Then ω_t(y) is obtained by replacing x by

x = [ (1/k) Tr( y^{i,j} ) ].  (123)

It follows that

ω_ℓ(y) = (1/k)( ω_t(y) ⊗ I_k ).  (124)

Then it is easy to verify the following:

Proposition 130 The co-vector field ω defined by Definition 129 is consistent with the equivalence ∼ on M^{[s,·]}_µ. Hence it is well defined on Σ^{[s,·]}_µ.

Definition 131 Let ⟨ω⟩ ∈ V*^r(Σ_µ) and ⟨X⟩ ∈ V^r(Σ_µ). Then the action of ⟨ω⟩ on ⟨X⟩ is defined as

⟨ω⟩(⟨X⟩) := ⟨ω(X)⟩.  (125)

Similar to the vector field case, if the co-vector field is firstly defined on M^s_µ, then we can assume ⟨ω⟩ is only defined on the leafs M^τ_µ satisfying M^s_µ ⊏ M^τ_µ.

4.7 Tensor Fields

The set of tensor fields on Σ_µ of covariant order α and contravariant order β is denoted by T^α_β(Σ_µ). To avoid notational complexity, we consider only the covariant tensor fields ⟨t⟩ ∈ T^α(Σ_µ).

Definition 132 A covariant tensor field ⟨t⟩ ∈ T^α(Σ_µ) is a multi-linear mapping

⟨t⟩ : V^r(Σ_µ) × ⋯ × V^r(Σ_µ) (α copies) → C^r(Σ_µ).

Assume t is defined on the root leaf at x ∈ M^s_µ as t_s ∈ T^α(M^s_µ), where p, q are co-prime and p/q = µ. The calculation is performed as follows: Construct the structure matrix of t as

M_s(x) := [ t^{1,⋯,1}_{1,⋯,1}(x) t^{1,⋯,1}_{1,⋯,2}(x) ⋯ t^{1,⋯,1}_{sq,⋯,sq}(x) ; t^{1,⋯,2}_{1,⋯,1}(x) t^{1,⋯,2}_{1,⋯,2}(x) ⋯ t^{1,⋯,2}_{sq,⋯,sq}(x) ; ⋮ ; t^{sp,⋯,sp}_{1,⋯,1}(x) t^{sp,⋯,sp}_{1,⋯,2}(x) ⋯ t^{sp,⋯,sp}_{sq,⋯,sq}(x) ] ∈ M_{(sp)^α × (sq)^α},  (126)

where

t^{i_1,⋯,i_α}_{j_1,⋯,j_α}(x) = t_s( ∂/∂x_{i_1,j_1}, ∂/∂x_{i_2,j_2}, ⋯, ∂/∂x_{i_α,j_α} )|_x,
i_d = 1, ⋯, sp; j_d = 1, ⋯, sq; d = 1, ⋯, α; x ∈ M_{sp×sq}.

Consider ⟨X^k⟩ ∈ V(Σ_µ) with its irreducible element X^k_s ∈ V(M^s_µ), where M^s_µ is the root leaf of ⟨X^k⟩. X^k_s is expressed in matrix form as

X^k_s = Σ_{i=1}^{sp} Σ_{j=1}^{sq} v^k_{i,j} ∂/∂x_{i,j} := [v^k_{ij}] := V^k, k = 1, ⋯, α.

Then we have

Proposition 133

t_s( ⟨X^1⟩, ⋯, ⟨X^α⟩ ) = M_s(x)( V^1 ⊗ V^2 ⊗ ⋯ ⊗ V^α ) = M_s ⋉ V^1_W ⋉ V^2_W ⋉ ⋯ ⋉ V^α_W,  (127)

where V^k_W denotes the stacked (row-by-row) form of V^k.

Next, we calculate the expressions of ⟨t⟩ on the other leafs. The following algorithm can be verified to be consistent on M^{[s,·]}_µ, where M^s_µ is the root leaf of ⟨t⟩.

Algorithm 1 Assume t is firstly defined on T^α(M^s_µ) as t_s(x). Then we extend it to the other leafs as:

• Step 1. Denote Q := { r | r | s }. Let τ ∈ Q and k = s/τ ∈ N. Then

t_τ(y) = (bd_k)^*(t_s)(bd_k(y))  (128)

can be calculated by constructing M_τ(y) (y ∈ M_{τp×τq}) as follows. First, split M_s(bd_k(y)) into (τp)^α × (τq)^α blocks:

M_s(bd_k(y)) = [ T^{1,⋯,1}_{1,⋯,1}(y) T^{1,⋯,1}_{1,⋯,2}(y) ⋯ T^{1,⋯,1}_{τq,⋯,τq}(y) ; T^{1,⋯,2}_{1,⋯,1}(y) T^{1,⋯,2}_{1,⋯,2}(y) ⋯ T^{1,⋯,2}_{τq,⋯,τq}(y) ; ⋮ ; T^{τp,⋯,τp}_{1,⋯,1}(y) T^{τp,⋯,τp}_{1,⋯,2}(y) ⋯ T^{τp,⋯,τp}_{τq,⋯,τq}(y) ],

where each block T^{i_1,⋯,i_α}_{j_1,⋯,j_α} ∈ M_{k^α×k^α}. Then we set

M_τ(y) := k [ ξ^{i_1,⋯,i_α}_{j_1,⋯,j_α}(y) ] ∈ M_{(τp)^α × (τq)^α},  (129)

where

ξ^{i_1,⋯,i_α}_{j_1,⋯,j_α}(y) = Tr( T^{i_1,⋯,i_α}_{j_1,⋯,j_α} ).

• Step 2. Assume ℓ ∧ s = τ. If ℓ = τ, t_ℓ = t_τ has already been defined in Step 1. So we assume ℓ = kτ, where k ≥ 2. Then

t_ℓ(z) = (pr_k)^*(t_τ)(pr_k(z))  (130)

can be calculated by constructing

M_ℓ(z) := (1/k^α)[ M_τ(y = pr_k(z)) ⊗ I_{k^α} ], z ∈ M_{ℓp×ℓq}.  (131)

Example 134 Consider a covariant tensor field ⟨t⟩ ∈ T²(Σ_µ), where µ = 2/3. Moreover, t is firstly defined at x = (x_{i,j}) ∈ M_{2×3} with its structure matrix

M_1(x) = [ 1 x_{12} 0 0 0 1 0 0 x_{22} ;
           0 1 0 −1 0 0 0 0 0 ;
           1 0 −1 0 0 0 −1 1 0 ;
           0 1 0 0 0 x_{22} 0 0 x_{23} ].  (132)

(1) Let X(⟨x⟩), Y(⟨x⟩) ∈ V(Σ_µ) be defined at x ∈ M_{2×3} as

X(x) = [ x_{13} 0 0 ; 0 0 x_{21} ], Y(x) = [ 1 0 1 ; 0 x_{11}² 0 ].

Evaluate ⟨t⟩(X, Y). First, using (127), we calculate that

t_1(X, ·) = M_1(x) ⋉ X_W = [ x_{13}+x_{21} x_{12}x_{13}−x_{21} 0 ; 0 x_{13} x_{21}x_{23} ].

Then

t_1(X, Y) = x_{13}(1 + x_{11}²) + x_{21}.

(2) Expressing ⟨t⟩ on the leaf M_{4×6}: Note that

x = pr_2(y) = [ (y_{11}+y_{22})/2 (y_{13}+y_{24})/2 (y_{15}+y_{26})/2 ; (y_{31}+y_{42})/2 (y_{33}+y_{44})/2 (y_{35}+y_{46})/2 ],

where y = [y_{i,j}] ∈ M_{4×6}. Using (131), we have

M_2(y) = (1/2) [ t^{11} t^{12} t^{13} ; t^{21} t^{22} t^{23} ],

where

t^{11} = [ 1 (y_{13}+y_{24})/2 0 ; 0 1 0 ] ⊗ I_2;
t^{12} = [ 0 0 1 ; −1 0 0 ] ⊗ I_2;
t^{13} = [ 0 0 (y_{33}+y_{44})/2 ; 0 0 0 ] ⊗ I_2;
t^{21} = [ 1 0 −1 ; 0 1 0 ] ⊗ I_2;
t^{22} = [ 0 0 0 ; 0 0 (y_{33}+y_{44})/2 ] ⊗ I_2;
t^{23} = [ −1 1 0 ; 0 0 (y_{35}+y_{46})/2 ] ⊗ I_2.

5 Lie Algebra on Square M-equivalence Space

5.1 Ring Structure on Σ

Consider the vector space of the equivalence classes of square matrices Σ := Σ_1. Since Σ is closed under the STP, more algebraic structures may be posed on it: first, a ring structure; second, a Lie algebra structure.

To begin with, we extend some fundamental concepts of matrices to their equivalence classes.

Definition 135 (1) ⟨A⟩ is nonsingular (symmetric, skew-symmetric, positive/negative (semi-)definite, upper/lower (strictly) triangular, diagonal, etc.) if its irreducible element A_1 is (equivalently, every A_i is).
(2) ⟨A⟩ and ⟨B⟩ are similar, denoted by ⟨A⟩ ∼ ⟨B⟩, if there exists a nonsingular ⟨P⟩ such that

⟨P⟩⁻¹ ⋉ ⟨A⟩ ⋉ ⟨P⟩ = ⟨B⟩.  (133)

(3) ⟨A⟩ and ⟨B⟩ are congruent, denoted by ⟨A⟩ ≃ ⟨B⟩, if there exists a nonsingular ⟨P⟩ such that

⟨P⟩^T ⋉ ⟨A⟩ ⋉ ⟨P⟩ = ⟨B⟩.  (134)

(4) ⟨J⟩ is called the Jordan normal form of ⟨A⟩, if the irreducible element J_1 ∈ ⟨J⟩ is the Jordan normal form of A_1.

Definition 136 [31] A set R with two operators +, × is a ring, if the following hold:

(1) (R, +) is an Abelian group;
(2) (R, ×) is a monoid;
(3) (Distributive rule)

(a + b) × c = a × c + b × c,
c × (a + b) = c × a + c × b, a, b, c ∈ R.

Observing M_1, which consists of all square matrices, both ⊞ (including ⊟) and ⋉ are well defined. Unfortunately, (M_1, ⊞) is not a group because there is no unit element. Since both ⊞ and ⋉ are consistent with the equivalence ∼, we consider Σ := M_1/∼. Then it is easy to verify the following:

Proposition 137 (Σ, ⊞, ⋉) is a ring.

Consider a polynomial on a ring R, p : R → R, as

p(x) = a_n x^n + a_{n−1} x^{n−1} + ⋯ + a_0, a_i ∈ R, i = 0, 1, ⋯, n.  (135)

It is obvious that this is well defined. Set R = (Σ, ⊞, ⋉); then p(x) defined in (135) is also well defined on R. Particularly, the coefficients a_i can be chosen as ⟨a_i⟩ for a_i ∈ F, i = 1, ⋯, n. Then p(x) is a "standard" polynomial. Unless stated elsewhere, in this paper only such standard polynomials are considered.

For any ⟨A⟩ ∈ Σ, the polynomial p(⟨A⟩) is well defined, and it is clear that

p(⟨A⟩) = ⟨p(A)⟩, for any A ∈ ⟨A⟩.  (136)

Using Taylor expansions, we can consider general matrix functions. For instance, we have the following result:

Theorem 138 Let f(x) be an analytic function. Then f(⟨A⟩) is well defined provided f(A) is well defined. Moreover,

f(⟨A⟩) = ⟨f(A)⟩.  (137)

In fact, the above result can be extended to the multi-variable case.
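The well-posedness of standard polynomials on equivalence classes, equation (136), can be checked numerically. The sketch below (our own helpers; the STP is assumed to be A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p), as defined earlier in the paper) evaluates a polynomial on two representatives of ⟨A⟩ and checks that the results are again representatives of one class:

```python
import numpy as np
from math import lcm

def stp(A, B):
    # semi-tensor product: A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p)
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t//n)) @ np.kron(B, np.eye(t//p))

def polyval(coeffs, A):
    # standard polynomial p(A) by Horner's scheme; for a single square A the
    # STP powers coincide with ordinary matrix powers
    out = np.zeros_like(A)
    for a in coeffs:
        out = out @ A + a*np.eye(A.shape[0])
    return out

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
c = [1.0, -2.0, 3.0]                          # p(x) = x^2 - 2x + 3
lhs = polyval(c, np.kron(A, np.eye(2)))       # p evaluated on the representative A ⊗ I_2
rhs = np.kron(polyval(c, A), np.eye(2))       # p(A) ⊗ I_2
print(np.allclose(lhs, rhs))                  # p(<A>) is representative-independent
print(stp(A, rng.standard_normal((3, 3))).shape)   # STP of a 2x2 and a 3x3 lives in M_6
```

The last line also illustrates why Σ is closed under ⋉: the product of representatives of any two square classes is again a square matrix.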

Definition 139 Let F(x_1, ⋯, x_k) be a k-variable analytic function. Then F(⟨A⟩_1, ⋯, ⟨A⟩_k) is a well-posed expression, where ⟨A⟩_i ∈ Σ, i = 1, ⋯, k. Assume A_i ∈ ⟨A⟩_i, i = 1, ⋯, k; then F(A_1, ⋯, A_k) is a realization of F(⟨A⟩_1, ⋯, ⟨A⟩_k). Particularly, if A_i ∈ M_{r×r}, ∀i, we call F(A_1, ⋯, A_k) a realization of F(⟨A⟩_1, ⋯, ⟨A⟩_k) on the r-th leaf.

Similar to (137), we also have

F(⟨A⟩_1, ⋯, ⟨A⟩_k) = ⟨F(A_1, ⋯, A_k)⟩,  (138)

provided F(A_1, ⋯, A_k) is well defined.

In the following we consider some fundamental matrix functions for Σ. We refer to [13] for the definitions and basic properties of some fundamental matrix functions. Using this knowledge, the following results are obvious:

Theorem 140 Let ⟨A⟩, ⟨B⟩ ∈ Σ (i.e., A, B are square matrices). Then the following hold:

(1) Assume A ⋉ B = B ⋉ A; then

e^{⟨A⟩} ⋉ e^{⟨B⟩} = e^{⟨A⟩ ⊞ ⟨B⟩}.  (139)

(2) If ⟨A⟩ is real skew-symmetric, then e^{⟨A⟩} is orthogonal.
(3) Assume ⟨B⟩ is invertible; we denote ⟨B⟩⁻¹ = ⟨B⁻¹⟩. Then

e^{⟨B⟩⁻¹ ⋉ ⟨A⟩ ⋉ ⟨B⟩} = ⟨B⟩⁻¹ ⋉ e^{⟨A⟩} ⋉ ⟨B⟩.  (140)

(4) Let A, B be close enough to the identity so that log(A) and log(B) are defined, and A ⋉ B = B ⋉ A. Then

log(⟨A⟩ ⋉ ⟨B⟩) = log(⟨A⟩) ⊞ log(⟨B⟩).  (141)

Many known results for matrix functions can be extended to Σ. For instance, it is easy to prove the following Euler formula:

Proposition 141 Consider F = R and let ⟨A⟩ ∈ Σ. Then the Euler formula holds. That is,

e^{i⟨A⟩} = cos(⟨A⟩) ⊞ i sin(⟨A⟩).  (142)

Recall the modifications of trace and determinant in Definitions 20 and 23. The following proposition shows that with these modifications the relationship between trace(A) and det(A) [13] remains available.

Proposition 142 Assume F = R (or F = C) and ⟨A⟩ ∈ Σ; then

e^{Tr(⟨A⟩)} = Dt( e^{⟨A⟩} ).  (143)

Proof. Let A_0 ∈ ⟨A⟩ and A_0 ∈ M_{n×n}. Then

e^{Tr(⟨A⟩)} = e^{(1/n) trace(A_0)} = | e^{trace(A_0)} |^{1/n} = | det(e^{A_0}) |^{1/n} = Dt(e^{A_0}) = Dt( e^{⟨A⟩} ). □

Next, we consider the characteristic polynomial of an equivalence class; it comes from standard matrix theory [24].

Definition 143 Let ⟨A⟩ ∈ Σ and let A_1 ∈ ⟨A⟩ be its irreducible element. Then

p_{⟨A⟩}(λ) := det( λI − A_1 )  (144)

is called the characteristic polynomial of ⟨A⟩.

The following result is an immediate consequence of the definition.

Theorem 144 (Cayley–Hamilton) Let p_{⟨A⟩} be the characteristic polynomial of ⟨A⟩. Then

p_{⟨A⟩}(⟨A⟩) = 0.  (145)

Remark 145 (1) If we choose A_k = A_1 ⊗ I_k and calculate the characteristic polynomial p_{A_k}, then p_{A_k}(λ) = (p_{A_1}(λ))^k. So p_{A_k}(⟨A⟩) = 0 is equivalent to p_{A_1}(⟨A⟩) = 0.
(2) Choosing any A_i ∈ ⟨A⟩, the corresponding minimal polynomials q_{A_i}(λ) are the same. So we have a unique minimal polynomial q_{⟨A⟩}(λ) = q_{A_i}(λ).

5.2 Bundled Lie Algebra

Consider the vector space of the equivalence classes of square matrices Σ := Σ_1; this section gives a Lie algebraic structure to it.
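Theorem 144 and Remark 145(1) can be tested numerically: the characteristic polynomial of the irreducible element A_1 annihilates every representative A_1 ⊗ I_k of the class. A minimal sketch (the helper evalp is ours):

```python
import numpy as np

def evalp(c, M):
    # evaluate the polynomial with coefficients c (highest degree first) at M,
    # using Horner's scheme
    out = np.zeros_like(M)
    for a in c:
        out = out @ M + a*np.eye(M.shape[0])
    return out

rng = np.random.default_rng(3)
A1 = rng.standard_normal((3, 3))   # an irreducible representative of <A>
c = np.poly(A1)                    # coefficients of p_<A>(λ) = det(λI - A1)

print(np.linalg.norm(evalp(c, A1)) < 1e-8)                      # p_<A>(A1) = 0
print(np.linalg.norm(evalp(c, np.kron(A1, np.eye(2)))) < 1e-8)  # and on A1 ⊗ I_2
```

Both residuals vanish (to rounding), since p_{A_1}(A_1 ⊗ I_k) = p_{A_1}(A_1) ⊗ I_k, which is the content of Remark 145(1).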

Definition 146 ([22]) A Lie algebra is a vector space g over some field F with a binary operation [·,·] : g × g → g, satisfying

(1) (bi-linearity)

[αA + βB, C] = α[A, C] + β[B, C],
[C, αA + βB] = α[C, A] + β[C, B],  (146)

where α, β ∈ F.

(2) (skew-symmetry)

[A, B] = −[B, A];  (147)

(3) (Jacobi identity)

[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0, ∀A, B, C ∈ g.  (148)

Definition 147 Let (E, Pr, B) be a discrete bundle with leaves E_i, i = 1, 2, ⋯. If

(1) (B, ⊕, ⊗) is a Lie algebra;
(2) (E_i, +, ×) is a Lie algebra, i = 1, 2, ⋯;
(3) the restriction Pr|_{E_i} : (E_i, +, ×) → (B, ⊕, ⊗) is a Lie algebra homomorphism, i = 1, 2, ⋯,

then (B, ⊕, ⊗) is called a bundled Lie algebra.

(We refer to Definition 65 and Remark 68 for the concept of a discrete bundle.)

On the vector space Σ we define an operation [·,·] : Σ × Σ → Σ as

[⟨A⟩, ⟨B⟩] := ⟨A⟩ ⋉ ⟨B⟩ ⊟ ⟨B⟩ ⋉ ⟨A⟩.  (149)

Then we have the following Lie algebra:

Theorem 148 The vector space Σ with the Lie bracket [·,·] defined in (149) is a bundled Lie algebra, denoted by gl(F).

Proof. Let M_1 = ⋃_{i=1}^∞ M^i, where M^i = M_{i×i}. Then it is clear that (M_1, Pr, Σ) is a discrete bundle.

Next, we prove (Σ, ⊞, [·,·]) is a Lie algebra. Equations (146) and (147) are obvious. We prove (148) only.

Assume A_1 ∈ ⟨A⟩, B_1 ∈ ⟨B⟩ and C_1 ∈ ⟨C⟩ are irreducible, and A_1 ∈ M_{m×m}, B_1 ∈ M_{n×n}, and C_1 ∈ M_{r×r}. Let t = m ∨ n ∨ r. Then it is easy to verify that

[⟨A⟩, [⟨B⟩, ⟨C⟩]] = ⟨ [(A_1 ⊗ I_{t/m}), [(B_1 ⊗ I_{t/n}), (C_1 ⊗ I_{t/r})]] ⟩.  (150)

Similarly, we have

[⟨B⟩, [⟨C⟩, ⟨A⟩]] = ⟨ [(B_1 ⊗ I_{t/n}), [(C_1 ⊗ I_{t/r}), (A_1 ⊗ I_{t/m})]] ⟩.  (151)

[⟨C⟩, [⟨A⟩, ⟨B⟩]] = ⟨ [(C_1 ⊗ I_{t/r}), [(A_1 ⊗ I_{t/m}), (B_1 ⊗ I_{t/n})]] ⟩.  (152)

Since (148) is true for any A, B, C ∈ gl(t, F), it is true for A = A_1 ⊗ I_{t/m}, B = B_1 ⊗ I_{t/n}, and C = C_1 ⊗ I_{t/r}. Using this fact and equations (150)–(152), we have

[⟨A⟩, [⟨B⟩, ⟨C⟩]] ⊞ [⟨B⟩, [⟨C⟩, ⟨A⟩]] ⊞ [⟨C⟩, [⟨A⟩, ⟨B⟩]]
= ⟨ [(A_1 ⊗ I_{t/m}), [(B_1 ⊗ I_{t/n}), (C_1 ⊗ I_{t/r})]] + [(B_1 ⊗ I_{t/n}), [(C_1 ⊗ I_{t/r}), (A_1 ⊗ I_{t/m})]] + [(C_1 ⊗ I_{t/r}), [(A_1 ⊗ I_{t/m}), (B_1 ⊗ I_{t/n})]] ⟩
= ⟨0⟩ = 0.

Let the Lie algebraic structure on M^n be gl(n, F) (where F = R or F = C). It follows from the consistency of ⊞ and ⋉ with the equivalence that Pr : gl(n, F) → gl(F) is a Lie algebra homomorphism. □

5.3 Bundled Lie Sub-algebra

This section considers some useful Lie sub-algebras of the Lie algebra gl(F). Assume g is a Lie algebra and h ⊂ g is a vector subspace. Then h is called a Lie sub-algebra if and only if [h, h] ⊂ h.

Definition 149 Let (E, Pr, B) be a bundled Lie algebra. If

(1) (H, ⊕, ⊗) is a Lie sub-algebra of (B, ⊕, ⊗);
(2) (F_i, +, ×) is a Lie sub-algebra of (E_i, +, ×), i = 1, 2, ⋯;
(3) the restriction Pr|_{F_i} : (F_i, +, ×) → (H, ⊕, ⊗) is a Lie algebra homomorphism, i = 1, 2, ⋯,

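As a numerical illustration of the proof above, the bracket (149) can be evaluated on representatives of different dimensions and the Jacobi identity checked on the common lift, exactly as in (150)–(152). The following is a minimal sketch assuming numpy; the helper names `stp`, `sta`, and `bracket` are ours, not the paper's.

```python
import numpy as np
from math import lcm

def stp(A, B):
    # Left STP: (A x I_{t/n})(B x I_{t/p}), t = n v p (n = cols of A, p = rows of B)
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def sta(A, B):
    # Semi-tensor addition of square matrices: A x I_{t/m} + B x I_{t/n}, t = m v n
    m, n = A.shape[0], B.shape[0]
    t = lcm(m, n)
    return np.kron(A, np.eye(t // m)) + np.kron(B, np.eye(t // n))

def bracket(A, B):
    # The bracket (149) computed on representatives
    return sta(stp(A, B), -stp(B, A))

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))   # a representative of <A>
B = rng.standard_normal((3, 3))   # a representative of <B>
C = rng.standard_normal((2, 2))   # a representative of <C>

# Jacobi identity (148): all three summands live in the common lift gl(6, R)
jac = sta(sta(bracket(A, bracket(B, C)),
              bracket(B, bracket(C, A))),
          bracket(C, bracket(A, B)))
jac_ok = np.allclose(jac, 0)
```

Changing the seeds or the (square) representative sizes leaves `jac` at machine-precision zero, since the computation reduces to the ordinary Jacobi identity in the lifted dimension.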
then (F, Pr, H) is called a bundled Lie sub-algebra of (E, Pr, B).

It is well known that gl(n, F) has some useful Lie sub-algebras. When the gl(n, F), ∀n, are generalized to the bundled Lie algebra gl(F) (over Σ), the corresponding Lie sub-algebras are investigated one by one in this section.

• Bundled orthogonal Lie sub-algebra

Definition 150 ⟨A⟩ ∈ Σ is said to be symmetric (skew-symmetric) if A^T = A (A^T = −A), ∀A ∈ ⟨A⟩.

A symmetric (skew-symmetric) ⟨A⟩ is well defined, because if A ∼ B and A^T = A (or A^T = −A), then so is B. It is also easy to verify the following:

Proposition 151 Assume ⟨A⟩ and ⟨B⟩ are skew-symmetric; then so is [⟨A⟩, ⟨B⟩].

We can therefore define the following bundled Lie sub-algebra.

Definition 152

o(F) := { ⟨A⟩ ∈ gl(F) | ⟨A⟩^T = −⟨A⟩ }

is called the bundled orthogonal algebra.

• Bundled special linear algebra

Definition 153

sl(F) := { ⟨A⟩ ∈ gl(F) | Tr(⟨A⟩) = 0 }

is called the bundled special linear algebra. Similar to the case of the orthogonal algebra, it is easy to verify that sl(F) is a Lie sub-algebra of gl(F).

• Bundled upper triangular algebra

Definition 154

t(F) := { ⟨A⟩ ∈ gl(F) | ⟨A⟩ is upper triangular }

is called the bundled upper triangular algebra. Similarly, we can define bundled lower triangular algebras.

• Bundled strictly upper triangular algebra

Definition 155

n(F) := { ⟨A⟩ ∈ gl(F) | ⟨A⟩ is strictly upper triangular }

is called the bundled strictly upper triangular algebra.

• Bundled diagonal algebra

Definition 156

d(F) := { ⟨A⟩ ∈ gl(F) | ⟨A⟩ is diagonal }

is called the bundled diagonal algebra.

• Bundled symplectic algebra

Definition 157

sp(F) := { ⟨A⟩ ∈ gl(F) | ⟨A⟩ satisfies (153) and A_1 ∈ M_{2n×2n}, n ∈ ℕ }

is called the bundled symplectic algebra, where

⟨J⟩ ⋉ ⟨A⟩ ⊞ ⟨A⟩^T ⋉ ⟨J⟩ = 0,    (153)

and

J = [ 0  1 ; −1  0 ].

Definition 158 A Lie sub-algebra J ⊂ g is called an ideal if

[g, J] ⊂ J.    (154)

Example 159 sl(F) is an ideal of gl(F), because

Tr[g, h] = Tr(g ⋉ h ⊟ h ⋉ g) = 0, ∀g ∈ gl(F), ∀h ∈ sl(F).

Hence [gl(F), sl(F)] ⊂ sl(F).

Many properties of the sub-algebras of gl(n, F) can be extended to the sub-algebras of gl(F). The following is an example.

Proposition 160

gl(F) = sl(F) ⊞ { r⟨1⟩ | r ∈ F }.    (155)

Proof. It is obvious that

⟨A⟩ = (⟨A⟩ ⊟ Tr(⟨A⟩)⟨1⟩) ⊞ Tr(⟨A⟩)⟨1⟩.

Since Tr(⟨A⟩ ⊟ Tr(⟨A⟩)⟨1⟩) = 0, we have ⟨A⟩ ⊟ Tr(⟨A⟩)⟨1⟩ ∈ sl(F). The conclusion follows. ✷

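Example 159 can be checked numerically: the STP commutator of representatives of any two classes has zero trace, so sl(F) absorbs brackets with all of gl(F). A small sketch assuming numpy; `stp` is our helper name.

```python
import numpy as np
from math import lcm

def stp(A, B):
    # Left STP of two matrices via Kronecker lifts to the common dimension
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

rng = np.random.default_rng(1)
g = rng.standard_normal((2, 2))          # any representative of a class in gl(F)
h = rng.standard_normal((3, 3))
h -= (np.trace(h) / 3) * np.eye(3)       # now Tr(h) = 0: a representative of sl(F)

comm = stp(g, h) - stp(h, g)             # both terms are 6 x 6, so plain '-' suffices
tr = np.trace(comm)                      # vanishes: the bracket stays in sl(F)
```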
Example 161
(1) Denote

gl(⟨A⟩, F) := { ⟨X⟩ | ⟨X⟩⟨A⟩ ⊞ ⟨A⟩⟨X⟩^T = 0 }.    (156)

It is obvious that gl(⟨A⟩, F) is a vector sub-space of gl(F). Let ⟨X⟩, ⟨Y⟩ ∈ gl(⟨A⟩, F). Then

[⟨X⟩, ⟨Y⟩]⟨A⟩ ⊞ ⟨A⟩[⟨X⟩, ⟨Y⟩]^T
= ⟨X⟩⟨Y⟩⟨A⟩ ⊟ ⟨Y⟩⟨X⟩⟨A⟩ ⊞ ⟨A⟩⟨Y⟩^T⟨X⟩^T ⊟ ⟨A⟩⟨X⟩^T⟨Y⟩^T
= ⟨X⟩⟨Y⟩⟨A⟩ ⊟ ⟨Y⟩⟨X⟩⟨A⟩ ⊞ ⟨Y⟩⟨X⟩⟨A⟩ ⊟ ⟨X⟩⟨Y⟩⟨A⟩
= 0.

Hence gl(⟨A⟩, F) ⊂ gl(F) is a bundled Lie sub-algebra.

(2) Assume ⟨A⟩ and ⟨B⟩ are congruent; that is, there exists a non-singular ⟨P⟩ such that ⟨A⟩ = ⟨P⟩^T ⟨B⟩ ⟨P⟩. Then it is easy to verify that π : gl(⟨A⟩, F) → gl(⟨B⟩, F) is an isomorphism, where

π(⟨X⟩) = ⟨P⟩^{−T} ⟨X⟩ ⟨P⟩^T.

5.4 Further Properties of gl(F)

Definition 162 Let p ∈ ℕ.
(1) The p-truncated equivalence class is defined as

⟨A⟩^{[·,p]} := { A′ ∈ ⟨A⟩ | A′ ∈ M_{s×s}, s | p }.    (157)

(2) The p-truncated square matrices are defined as

M^{[·,p]} := { A ∈ M_{n×n} | n | p }.    (158)

(3) The p-truncated equivalence space is defined as

Σ^{[·,p]} := M^{[·,p]} / ∼.    (159)

Remark 163
(1) If A_1 ∈ ⟨A⟩ is irreducible, A_1 ∈ M_{n×n}, and n is not a divisor of p, then ⟨A⟩^{[·,p]} = ∅.
(2) It is obvious that (M^{[·,p]}, Pr, Σ^{[·,p]}) is a discrete bundle, which is a sub-bundle of (M, Pr, Σ); that is, the following diagram (160) is commutative:

M^{[·,p]}  --π-->   M
   |Pr              |Pr        (160)
Σ^{[·,p]}  --π′-->  Σ

where π and π′ are the inclusion mappings.
(3) It is ready to verify that Σ^{[·,p]} is closed under ⊞ and [·,·], defined in (149); hence the inclusion mapping π′ : Σ^{[·,p]} → Σ is a Lie algebra homomorphism. Identifying ⟨A⟩^{[·,p]} with its image ⟨A⟩ = π′(⟨A⟩^{[·,p]}), Σ^{[·,p]} becomes a Lie sub-algebra of gl(F). We denote this as

gl^{[·,p]}(F) := (Σ^{[·,p]}, ⊞, [·,·]),    (161)

and call gl^{[·,p]}(F) the p-truncated Lie sub-algebra of gl(F).
(4) Let Γ ⊂ gl(F) be a Lie sub-algebra. Then its p-truncated sub-algebra Γ^{[·,p]} is defined in a similar way as for gl^{[·,p]}. Alternatively, it can be considered as

Γ^{[·,p]} := Γ ∩ Σ^{[·,p]}.    (162)

Definition 164 ([26]) Let g be a Lie algebra.
(1) Denote the derived series by D(g) := [g, g] and D^{(k+1)}(g) := D(D^{(k)}(g)), k = 1, 2, ⋯. Then g is solvable if there exists an n ∈ ℕ such that D^{(n)}(g) = {0}.
(2) Denote the descending central series by C(g) := [g, g] and C^{(k+1)}(g) := [g, C^{(k)}(g)], k = 1, 2, ⋯. Then g is nilpotent if there exists an n ∈ ℕ such that C^{(n)}(g) = {0}.

Definition 165 Let Γ ⊂ gl(F) be a sub-algebra of gl(F).
(1) Γ is solvable if for any p ∈ ℕ the truncated sub-algebra Γ^{[·,p]} is solvable.
(2) Γ is nilpotent if for any p ∈ ℕ the truncated sub-algebra Γ^{[·,p]} is nilpotent.

Definition 166 ([26]) Let g be a Lie algebra.

(1) g is simple if it has no non-trivial ideal; that is, the only ideals are {0} and g itself.
(2) g is semi-simple if it has no solvable ideal except {0}.

Though the following proposition is simple, it is also fundamental.

Proposition 167 The Lie algebra gl^{[·,p]}(F) is isomorphic to the classical linear algebra gl(p, F).

Proof. First we construct a mapping π : gl^{[·,p]}(F) → gl(p, F) as follows. Assume ⟨A⟩ ∈ gl^{[·,p]}(F) and A_1 ∈ ⟨A⟩ is irreducible, say A_1 ∈ M_{n×n}; then by definition n | p. Denote s = p/n, and define

π(⟨A⟩) := A_1 ⊗ I_s ∈ gl(p, F).

Set π′ : gl(p, F) → gl^{[·,p]}(F) as

π′(B) := ⟨B⟩ ∈ gl^{[·,p]}(F);

then it is ready to verify that π is a bijective mapping and π^{−1} = π′. By the definitions of ⊞ and [·,·], it is obvious that π is a Lie algebra isomorphism. ✷

The following properties hold for the classical gl(n, F) [26], [46]. Using Proposition 167, it is easy to verify that they also hold for gl(F).

Proposition 168 Let g ⊂ gl(F) be a Lie sub-algebra.
(1) If g is nilpotent, then it is solvable.
(2) If g is solvable (or nilpotent), then so are its sub-algebras and its homomorphic images.
(3) If h ⊂ g is an ideal of g, and h and g/h are solvable, then g is also solvable.

Definition 169 Let ⟨A⟩ ∈ gl(F). The adjoint representation ad_{⟨A⟩} : gl(F) → gl(F) is defined as

ad_{⟨A⟩} ⟨B⟩ = [⟨A⟩, ⟨B⟩].    (163)

To see that (163) is well defined, we have to prove that

ad_{⟨A⟩} ⟨B⟩ = ⟨ad_A B⟩, ∀A ∈ ⟨A⟩, B ∈ ⟨B⟩.    (164)

This follows immediately from the consistency of ⋉ and ⊞ (⊟) with ∼_ℓ.

Example 170 Consider ⟨A⟩ ∈ Σ. Assume ⟨A⟩ is nilpotent, that is, there is a k > 0 such that ⟨A⟩^k = 0. Then ad_{⟨A⟩} is also nilpotent.

Note that ⟨A⟩^k = 0 if and only if (A ⊗ I_s)^k = 0. Similarly, ad_{⟨A⟩}^k = 0 if and only if ad_{A⊗I_s}^k = 0. Hence we need only show that ad_A is nilpotent, where A ∈ ⟨A⟩ and A ∈ M_{n×n}. Using the definition

ad_A B = AB − BA,

a straightforward computation shows that

ad_A^m B = Σ_{i=0}^{m} (−1)^i C(m, i) A^{m−i} B A^i, ∀B ∈ M_{n×n}.

For m = 2k − 1, each term contains either A^{m−i} with m − i ≥ k or A^i with i ≥ k; hence ad_A^m B = 0, ∀B ∈ M_{n×n}. It follows that

ad_A^{2k−1} = 0.

Definition 171
(1) Let A, B ∈ gl(n, F). The Killing form (·,·)_K : gl(n, F) × gl(n, F) → F is defined as

(A, B)_K := Tr(ad_A ad_B).    (165)

(2) Assume ⟨A⟩, ⟨B⟩ ∈ gl(F). The Killing form (·,·)_K : gl(F) × gl(F) → F is defined as

(⟨A⟩, ⟨B⟩)_K := Tr(ad_{⟨A⟩} ⋉ ad_{⟨B⟩}).    (166)

(We refer to [26] for the original definition; the above definition is with a mild modification.)

To see that the Killing form is well defined, we also need to prove

(⟨A⟩, ⟨B⟩)_K = (A, B)_K, ∀A ∈ ⟨A⟩, B ∈ ⟨B⟩.    (167)

Similar to (164), it can be verified by a straightforward calculation.

Because of equations (164) and (167), the following properties of finite-dimensional Lie algebras [46] can easily be extended to gl(F):

Proposition 172 Consider g = gl(F). Let ⟨A⟩, ⟨A⟩_1, ⟨A⟩_2, ⟨B⟩, ⟨E⟩ ∈ g and c_1, c_2 ∈ F. Then
(1)

(⟨A⟩, ⟨B⟩)_K = (⟨B⟩, ⟨A⟩)_K.    (168)

(2)

(c_1⟨A⟩_1 ⊞ c_2⟨A⟩_2, ⟨B⟩)_K = c_1(⟨A⟩_1, ⟨B⟩)_K + c_2(⟨A⟩_2, ⟨B⟩)_K, c_1, c_2 ∈ F.    (169)

(3)

(ad_{⟨A⟩}⟨B⟩, ⟨E⟩)_K ⊞ (⟨B⟩, ad_{⟨A⟩}⟨E⟩)_K = 0.    (170)

(4) Let h ⊂ g be an ideal of g, and ⟨A⟩, ⟨B⟩ ∈ h. Then

(⟨A⟩, ⟨B⟩)_K = (⟨A⟩, ⟨B⟩)_K^h.    (171)

The right-hand side means the Killing form on the ideal h.
(5) A sub-algebra ξ ⊂ g is semi-simple if and only if its Killing form is non-degenerate.

The Engel theorem can easily be extended to gl(F):

Theorem 173 Let {0} ≠ g ⊂ gl(F) be a bundled Lie sub-algebra. Assume each ⟨A⟩ ∈ g is nilpotent (i.e., for each ⟨A⟩ ∈ g there exists a k > 0 such that ⟨A⟩^k = ⟨A^k⟩ = 0).
(1) If g is finitely generated, then there exists a vector X ≠ 0 (of suitable dimension) such that

G ⋉ X = 0, ∀G ∈ g.

(2) g is nilpotent.

Definition 174 ([26]) Let V be an n-dimensional vector space. A flag in V is a chain of subspaces

0 = V_0 ⊂ V_1 ⊂ V_2 ⊂ ⋯ ⊂ V_n = V,

with dim(V_i) = i. Let A ∈ End(V) be an endomorphism of V. A is said to stabilize this flag if

A V_i ⊂ V_i, i = 1, ⋯, n.

The Lie theorem can be extended to gl(F) as follows.

Theorem 175 Assume g ⊂ gl(F) is a solvable Lie sub-algebra. Then for any p > 0 there is a flag of ideals 0 = I_0 ⊂ I_1 ⊂ ⋯ ⊂ I_p such that the truncated g^{[·,p]} stabilizes the flag.

Corollary 176 Assume g ⊂ gl(F) is a Lie sub-algebra. Then g is solvable if and only if D(g) is nilpotent.

Example 177 Consider the bundled Lie sub-algebras t(F) and n(F). It is easy to verify the following:
(1) t(F) is solvable;
(2) n(F) is nilpotent.

Even though gl(F) is an infinite-dimensional Lie algebra, it has almost all the properties of finite-dimensional Lie algebras. This claim can be verified one by one easily. The reason is that gl(F) is essentially a union of finite-dimensional Lie algebras.

6 Lie Group on Nonsingular M-equivalence Space

6.1 Bundled Lie Group

Consider Σ := Σ_1. We define a subset

GL(F) := { ⟨A⟩ ∈ Σ | Dt(⟨A⟩) ≠ 0 }.    (172)

We emphasize the fact that GL(F) is an open subset of Σ. For an open subset of a bundled manifold we have the following result.

Proposition 178 Let M be a bundled manifold, and N an open subset of M. Then N is also a bundled manifold.

Proof. It is enough to construct an open cover of N. Starting from the open cover of M, denoted by

C = { U_λ | λ ∈ Λ },

we construct

C_N := { U_λ ∩ N | U_λ ∈ C, λ ∈ Λ }.

Then we can prove that C_N is C^r (C^∞ or C^ω) compatible as long as C is. Verifying the other conditions is trivial. ✷

Corollary 179 GL(F) is a bundled manifold.

Note that Proposition 49 shows that ⋉ : GL(F) × GL(F) → GL(F) is well defined.

Definition 180 A topological space G is a bundled Lie group if

(1) it is a bundled analytic manifold;
(2) it is a group;
(3) the product (A, B) ↦ AB and the inverse mapping A ↦ A^{−1} are analytic.

The following result is an immediate consequence of the definition.

Theorem 181 GL(F) is a bundled Lie group.

Proof. We already know that GL(F) is a bundled analytic manifold. We first prove that GL(F) is a group. It is ready to verify that ⟨1⟩ is the identity; moreover, ⟨A⟩^{−1} = ⟨A^{−1}⟩. The conclusion follows. Using a simple coordinate chart, it is obvious that the inverse and the product are two analytic mappings. ✷

6.2 Relationship with gl(F)

Denote

W := { A ∈ M_1 | det(A) ≠ 0 },

and

W_s := W ∩ M_{s×s}, s = 1, 2, ⋯.

Consider the bundle (M_1, Pr, Σ), where the map Pr : A ↦ ⟨A⟩ is the natural projection. It has a natural sub-bundle (W, Pr, GL(F)) via the following bundle morphism:

W      --π-->   M_1
 |Pr             |Pr       (173)
GL(F)  --π′-->  Σ

In fact, the projection leads to a Lie group homomorphism.

Theorem 182
(1) With the natural group and differential structures, W_s = GL(s, F) is a Lie group.
(2) Consider the projection Pr. Restricting it to each leaf yields

Pr_s : W_s = GL(s, F) → GL(F).    (174)

Then Pr_s is a Lie group homomorphism.
(3) Set the image set as Pr(GL(s, F)) := Ψ_s. Then Ψ_s < GL(F) is a Lie sub-group. Moreover,

Pr_s : GL(s, F) → Ψ_s    (175)

is a Lie group isomorphism.

Definition 183 A vector field ⟨ξ⟩ ∈ V(GL(F)) (where for each ⟨P⟩ ∈ GL(F), ⟨ξ(P)⟩ ∈ T_{⟨P⟩}(GL(F))) is called a left-invariant vector field if for any ⟨A⟩ ∈ GL(F)

(L_{⟨A⟩})_* (⟨ξ⟩(⟨P⟩)) := ⟨(L_A)_* (ξ(P))⟩ = ⟨ξ(AP)⟩ = ⟨ξ⟩(⟨AP⟩).

Then it is easy to verify the following relationship between GL(F) and gl(F):

Theorem 184 The corresponding Lie algebra of the bundled Lie group GL(F) is gl(F), in the following natural sense:

gl(F) ≃ T_{⟨1⟩}(GL(F)) --(L_{⟨A⟩})_*--> T_{⟨A⟩}(GL(F)).

That is, gl(F) is a Lie algebra isomorphic to the Lie algebra consisting of the vectors in the tangent space of GL(F) at the identity. These vectors generate the left-invariant vector fields, which form the tangent space at any ⟨A⟩ ∈ GL(F).

Let Pr : M_{n×n} → Σ be the natural mapping A ↦ ⟨A⟩. Then we have the following commutative diagram:

gl(n, F)  --exp-->  GL(n, F)
   |Pr                 |Pr        (176)
gl(F)     --exp-->  GL(F)

where n = 1, 2, ⋯. Recalling (16), we know that the exponential mapping exp is well defined for every ⟨X⟩ ∈ gl(F). Diagram (176) also shows the relationship between gl(F) and GL(F), which is a generalization of the relationship between gl(n, F) and GL(n, F).

6.3 Lie Subgroups of GL(F)

It has been discussed that gl(F) has some useful Lie sub-algebras. It is obvious that GL(F) has some Lie sub-groups corresponding to those sub-algebras of gl(F). They are briefly discussed as follows.

• Bundled orthogonal group

Definition 185 ⟨A⟩ ∈ GL(F) is said to be orthogonal if ⟨A⟩^T = ⟨A⟩^{−1}.

It is also easy to verify the following:

Proposition 186 Assume ⟨A⟩ and ⟨B⟩ are orthogonal; then so is ⟨A⟩ ⋉ ⟨B⟩.

We can therefore define the following bundled Lie sub-group of GL(F).

Definition 187

O(F) := { ⟨A⟩ ∈ GL(F) | ⟨A⟩^T = ⟨A⟩^{−1} }

is called the bundled orthogonal group. It is easy to verify the following proposition:

Proposition 188 Consider the bundled orthogonal group.
(1) O(F) is a Lie sub-group of GL(F), i.e., O(F) < GL(F).
(2) Set SO(F) := { ⟨A⟩ ∈ O(F) | det(⟨A⟩) = 1 }. Then SO(F) < O(F) < GL(F).
(3) The Lie algebra of both O(F) and SO(F) is o(F).

• Bundled special linear group

Definition 189

SL(F) := { ⟨A⟩ ∈ GL(F) | det(⟨A⟩) = 1 }

is called the bundled special linear group. Similar to the case of the orthogonal group, it is easy to verify the following:

Proposition 190 Consider the bundled special linear group.
(1) SL(F) is a Lie sub-group of GL(F), i.e., SL(F) < GL(F).
(2) The Lie algebra of SL(F) is sl(F).

• Bundled upper triangular group

Definition 191

T(F) := { ⟨A⟩ ∈ GL(F) | ⟨A⟩ is upper triangular }

is called the bundled upper triangular group.

Proposition 192 Consider the bundled upper triangular group.
(1) T(F) is a Lie sub-group of GL(F), i.e., T(F) < GL(F).
(2) The Lie algebra of T(F) is t(F).

• Bundled special upper triangular group

Definition 193

N(F) := { ⟨A⟩ ∈ T(F) | det(⟨A⟩) = 1 }

is called the bundled special upper triangular group.

Proposition 194 Consider the bundled special upper triangular group.
(1) N(F) is a Lie sub-group of T(F), i.e., N(F) < T(F) < GL(F).
(2) The Lie algebra of N(F) is n(F).

• Bundled symplectic group

Definition 195

SP(F) := { ⟨A⟩ ∈ GL(F) | A_1 ∈ M_{2n×2n} satisfies (177), n ∈ ℕ }

is called the bundled symplectic group, where

⟨A⟩^T ⋉ ⟨J⟩ ⋉ ⟨A⟩ = ⟨J⟩,    (177)

and J is defined in (29).

Proposition 196 Consider the bundled symplectic group.
(1) SP(F) is a Lie sub-group of GL(F), i.e., SP(F) < GL(F).
(2) The Lie algebra of SP(F) is sp(F).

6.4 Symmetric Group

Let S_k be the k-th order symmetric group. Denote

S := ∪_{k=1}^∞ S_k.

Definition 197 A matrix A ∈ M_k is called a permutation matrix if Col(A) ⊂ Δ_k and Col(A^T) ⊂ Δ_k. The set of k × k permutation matrices is denoted by P_k.

Proposition 198 Consider the set of permutation matrices.
(1) If P ∈ P_k, then

P^T = P^{−1}.    (178)

(2) P_k < O(k, ℝ) < GL(k, ℝ).

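Proposition 198, together with the closure of permutation matrices under the STP, can be sketched numerically as follows (numpy assumed; `stp` and `is_permutation` are our helper names, not the paper's):

```python
import numpy as np
from math import lcm

def stp(A, B):
    # Left STP via Kronecker lifts to the common dimension t = n v p
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n, dtype=int)) @ np.kron(B, np.eye(t // p, dtype=int))

def is_permutation(M):
    # 0/1 entries with exactly one 1 in every row and column
    return ((M == 0) | (M == 1)).all() and (M.sum(0) == 1).all() and (M.sum(1) == 1).all()

P = np.array([[0, 1],
              [1, 0]])              # a matrix in P_2
Q = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])           # a matrix in P_3

ortho = np.array_equal(P.T @ P, np.eye(2, dtype=int))   # (178): P^T = P^{-1}
R = stp(P, Q)                                           # lands in P_6, t = 2 v 3
```

Since the Kronecker lift of a permutation matrix is again a permutation matrix, `R` is a 6 × 6 permutation matrix.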
Let σ ∈ S_k. Define M_σ ∈ P_k as M_σ := [m_{i,j}], where

m_{i,j} = 1 if σ(j) = i, and 0 otherwise.    (179)

The following proposition is easily verifiable.

Proposition 199 Define π : S_k → P_k, where π(σ) := M_σ ∈ P_k is constructed by (179). Then π is an isomorphism.

Assume σ, λ ∈ S_k. Then Proposition 199 leads to

M_{σ∘λ} = M_σ M_λ.    (180)

Next, assume σ ∈ S_m, λ ∈ S_n; we try to generalize (180).

Definition 200 Assume σ ∈ S_m, λ ∈ S_n. The (left) STP of σ and λ is defined by

M_{σ⋉λ} := M_σ ⋉ M_λ ∈ P_t,    (181)

where t = m ∨ n. That is,

σ ⋉ λ := π^{−1}(M_σ ⋉ M_λ) ∈ S_t.    (182)

Similarly, we can define the right STP of σ and λ.

Now it is clear that (S, ⋉) < (M_1, ⋉) is a sub-monoid. To get a bundled Lie subgroup structure, we consider the quotient space

D := (S, ⋉) / ∼_ℓ.

Then we have the following:

Theorem 201 D is a discrete bundled sub-Lie group of GL(F).

D may be used to investigate the permutation of an uncertain number of elements.

7 V-equivalence

This section considers the vector equivalence (V-equivalence) of vectors. Since many results are parallel to and simpler than those for the M-equivalence, some detailed discussions are omitted.

7.1 Equivalence of Vectors of Different Dimensions

Consider the set of vectors on field F. Denote it as

V := ∪_{i=1}^∞ V_i,    (183)

where V_i is the i-dimensional vector space, which is a subset of V.

Our purpose is to build a vector space structure on V, precisely speaking, on the equivalence classes of V. To this end, we first propose an equivalence relation.

Definition 202
(1) Let X, Y ∈ V. X and Y are said to be V-equivalent, denoted by X ↔ Y, if there exist two one-vectors 1_s and 1_t such that

X ⊗ 1_s = Y ⊗ 1_t.    (184)

(2) The equivalence class is denoted as

⌈X⌉ := { Y | Y ↔ X }.

(3) In an equivalence class ⌈X⌉ a partial order (≤) is defined as: X ≤ Y if there exists a one-vector 1_s such that X ⊗ 1_s = Y. X_1 ∈ ⌈X⌉ is irreducible if there are no Y and 1_s, s > 1, such that X_1 = Y ⊗ 1_s.

Remark 203
(1) The equivalence defined above can be seen as the left equivalence. Formally, we set ↔ := ↔_ℓ.
(2) The right equivalence can be defined as follows: Let X, Y ∈ V. X and Y are said to be right equivalent, denoted by X ↔_r Y, if there exist two one-vectors 1_s and 1_t such that

1_s ⊗ X = 1_t ⊗ Y.    (185)

(3) The right equivalence class of X is denoted as ⌈X⌉_r. Of course, we have ⌈X⌉_ℓ = ⌈X⌉.
(4) Hereafter, the left equivalence is considered as the default one; that is, we always assume ↔ = ↔_ℓ. With some obvious modifications one sees easily that the arguments/results in the sequel are also valid for the right equivalence.

The following properties of the M-equivalence are also true for the V-equivalence. The proofs are similar to those for the M-equivalence and are therefore omitted.

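Condition (184) can be tested mechanically: one can check that comparing the two lifts at the least common dimension is equivalent to the existence of 1_s and 1_t. A minimal sketch assuming numpy; `v_equivalent` is our helper name.

```python
import numpy as np
from math import lcm

def v_equivalent(X, Y):
    # X <-> Y iff X x 1_s = Y x 1_t for some one-vectors; it suffices to
    # compare the lifts into the common dimension p v q.
    p, q = len(X), len(Y)
    t = lcm(p, q)
    return np.array_equal(np.kron(X, np.ones(t // p)), np.kron(Y, np.ones(t // q)))

X = np.array([1., 2.])
Y = np.kron(X, np.ones(3))        # Y = X x 1_3, hence X <-> Y
Z = np.array([1., 2., 3.])        # not equivalent to X
```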
Theorem 204
(1) If X ↔ Y, then there exists a vector Γ such that

X = Γ ⊗ 1_β, Y = Γ ⊗ 1_α.    (186)

(2) In each class ⌈X⌉ there exists a unique X_1 ∈ ⌈X⌉ such that X_1 is irreducible.

Remark 205
(1) If X = Y ⊗ 1_s, then Y is called a divisor of X and X is called a multiple of Y. This relation determines the order Y ≤ X.
(2) If (186) holds and α, β are co-prime, then the Γ satisfying (186) is called the greatest common divisor of X and Y. Moreover, Γ = gcd(X, Y) is unique.
(3) If (184) holds and s, t are co-prime, then

Ξ := X ⊗ 1_s = Y ⊗ 1_t    (187)

is called the least common multiple of X and Y. Moreover, Ξ = lcm(X, Y) is unique.
(4) Consider an equivalence class ⌈X⌉ and denote the unique irreducible element by X_1, which is called the root element. All the elements in ⌈X⌉ can be expressed as

X_i = X_1 ⊗ 1_i, i = 1, 2, ⋯.    (188)

X_i is called the i-th element of ⌈X⌉. Hence an equivalence class ⌈X⌉ is a well-ordered sequence:

⌈X⌉ = { X_1, X_2, X_3, ⋯ }.

We also have a lattice structure on ⌈X⌉ = { X_1, X_2, ⋯ }:

Proposition 206 (⌈X⌉, ≤) is a lattice.

Proof. It is easy to verify that for U, V ∈ ⌈X⌉,

sup(U, V) = lcm(U, V); inf(U, V) = gcd(U, V).

The conclusion follows. ✷

Proposition 207 Let A_1 ∈ M and X_1 ∈ V both be irreducible. Then the two lattices ⟨A⟩ and ⌈X⌉, generated by A_1 and X_1, are isomorphic. Precisely,

(⟨A⟩, ≺) ≅ (⌈X⌉, ≤).    (189)

The isomorphism is ϕ : A_s = A_1 ⊗ I_s ↦ X_s = X_1 ⊗ 1_s.

Next we investigate the lattice structure on V. Consider V_i and V_j, where i | j and k = j/i. Then we have V_i ⊗ 1_k ⊂ V_j. This order is denoted by

V_i ⊑ V_j.

Using this order, the following result is obvious.

Proposition 208 (V, ⊑) is a lattice with

sup(V_i, V_j) = V_{i∨j},
inf(V_i, V_j) = V_{i∧j}.    (190)

Proposition 209 Assume ⌈X⌉ has its irreducible vector X_1 ∈ ℝ^i. Define ϕ : ⌈X⌉ → V as ϕ(X_1) := V_i. Then ϕ : (⌈X⌉, ≤) → (V, ⊑) is a lattice homomorphism:

(⌈X⌉, ≤) ≈ (V, ⊑).    (191)

We also have the following isomorphism relation:

Proposition 210 Define ϕ : V → M_μ as

ϕ(V_i) := M_μ^i.    (192)

Then ϕ is a lattice isomorphism.

Definition 211
(1) Let p ∈ ℕ. The p-lower truncated vector space is defined as

V^{[p,·]} := ∪_{p|s} V_s.    (193)

(2) The quotient space of V under ↔ is denoted as

Ω_V := { ⌈X⌉ | X ∈ V }.    (194)

(3) The subspace

Ω_V^i := V^{[i,·]} / ↔ = { ⌈X⌉ | X ∈ V^{[i,·]} }.    (195)

Consider an equivalence class ⌈X⌉. Let X_1 ∈ ⌈X⌉ be the irreducible element and dim(X_1) = r. Then we define a mapping ψ : ⌈X⌉ → V as

ψ(X_i) = V_{ir}, i = 1, 2, ⋯    (196)

Similar to the matrix case, we have

Proposition 212 Let ψ : ⌈X⌉ → V be defined by (196). Then ψ : (⌈X⌉, ≤) → (V, ⊑) is a lattice homomorphism.

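The root element of Theorem 204(2), and the gcd/lcm of Remark 205, can be computed directly; here is a sketch assuming numpy, where `root` is our helper and the gcd/lcm below are formed through the indexing (188):

```python
import numpy as np

def root(X):
    # The unique irreducible element X1 of the class of X (Theorem 204(2)):
    # the shortest Y with X = Y x 1_s.
    n = len(X)
    for m in range(1, n + 1):            # smallest candidate root dimension first
        if n % m == 0:
            R = X.reshape(m, n // m)
            if (R == R[:, :1]).all():    # X = Y x 1_{n/m} with Y = R[:, 0]
                return R[:, 0].copy()
    return X

X1 = np.array([1., 2., 3.])
X = np.kron(X1, np.ones(4))              # the 4th element of the class, see (188)
Y = np.kron(X1, np.ones(6))              # the 6th element

# In the lattice of Proposition 206, indices combine by gcd/lcm:
g = np.kron(X1, np.ones(2))              # gcd(X, Y) = X1 x 1_{4 ^ 6}
l = np.kron(X1, np.ones(12))             # lcm(X, Y) = X1 x 1_{4 v 6}

root_ok = bool((root(X) == X1).all() and (root(Y) == X1).all())
```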
7.2 Vector Space Structure on V-equivalence Space

To begin with, we define an addition between vectors of different dimensions.

Definition 213 Let X ∈ V_p and Y ∈ V_q, and t = p ∨ q. Then
(1) the vector addition ⊞⃗ : V × V → V of X and Y is defined as

X ⊞⃗ Y := (X ⊗ 1_{t/p}) + (Y ⊗ 1_{t/q});    (197)

(2) the subtraction ⊟⃗ is defined as

X ⊟⃗ Y := X ⊞⃗ (−Y).    (198)

Remark 214 Let X ∈ V_p^T and Y ∈ V_q^T be two row vectors. Then we define
(1) the vector addition as

X ⊞⃗ Y := (X ⊗ 1_{t/p}^T) + (Y ⊗ 1_{t/q}^T);    (199)

(2) the subtraction as

X ⊟⃗ Y := X ⊞⃗ (−Y).    (200)

Proposition 215 The vector addition ⊞⃗ is consistent with the equivalence ↔. That is, if X ↔ X̃ and Y ↔ Ỹ, then X ⊞⃗ Y ↔ X̃ ⊞⃗ Ỹ.

Proof. Since X ↔ X̃, according to Theorem 204 there exists Γ, say Γ ∈ V_p, such that

X = Γ ⊗ 1_α, X̃ = Γ ⊗ 1_β.

Similarly, there exists Π, say Π ∈ V_q, such that

Y = Π ⊗ 1_s, Ỹ = Π ⊗ 1_t.

Let ξ = p ∨ q, η = pα ∨ sq, and η = ξℓ. Then

X ⊞⃗ Y = (Γ ⊗ 1_α) ⊞⃗ (Π ⊗ 1_s)
= Γ ⊗ 1_α ⊗ 1_{η/(αp)} + Π ⊗ 1_s ⊗ 1_{η/(sq)}
= Γ ⊗ 1_{η/p} + Π ⊗ 1_{η/q}
= (Γ ⊗ 1_{ξ/p} + Π ⊗ 1_{ξ/q}) ⊗ 1_ℓ
= (Γ ⊞⃗ Π) ⊗ 1_ℓ.

Hence X ⊞⃗ Y ↔ Γ ⊞⃗ Π. Similarly, we can show that X̃ ⊞⃗ Ỹ ↔ Γ ⊞⃗ Π. The conclusion follows. ✷

Corollary 216 The vector addition ⊞⃗ (or subtraction ⊟⃗) is well defined on the quotient space Ω_V, as well as on Ω_V^i, i = 1, 2, ⋯. That is,

⌈X⌉ ⊞⃗ ⌈Y⌉ := ⌈X ⊞⃗ Y⌉, X, Y ∈ V (or ⌈X⌉, ⌈Y⌉ ∈ Ω_V).    (201)

Let ⌈X⌉ ∈ Ω_V (or ⌈X⌉ ∈ Ω_V^i). Then we define a scalar product

a⌈X⌉ := ⌈aX⌉, a ∈ F.    (202)

Using (201) and (202), one sees easily that Ω_V becomes a vector space:

Theorem 217 Using the vector addition defined in (201) and the scalar product defined in (202), we have:
(1) Ω_V is a vector space over F;
(2) Ω_V^i, i = 1, 2, ⋯, are subspaces of Ω_V;
(3) if i | j, then Ω_V^j is a subspace of Ω_V^i.

Let E ⊂ V be a set of vectors. Then

⌈E⌉ := { ⌈X⌉ | X ∈ E }.    (203)

The following proposition shows that the vector equivalence keeps the space–subspace relationship unchanged.

Proposition 218 Assume E ⊂ V_i is a subspace of V_i. Then ⌈E⌉ ⊂ Ω_V^i is a subspace of Ω_V^i.

Definition 213 can be translated to its corresponding right one as follows:

Definition 219 Let X ∈ V_p and Y ∈ V_q, and t = p ∨ q. Then
(1) the (right) vector addition ⊞⃗_r : V × V → V of X and Y is defined as

X ⊞⃗_r Y := (1_{t/p} ⊗ X) + (1_{t/q} ⊗ Y);    (204)

(2) the (right) subtraction ⊟⃗_r is defined as

X ⊟⃗_r Y := X ⊞⃗_r (−Y).    (205)

Then all the arguments in this subsection and the following several subsections can be stated in a parallel way for the right addition and the corresponding linear spaces.

7.3 Inner Product and Linear Mappings

Definition 220 Let X ∈ V_m ⊂ V, Y ∈ V_n ⊂ V, and t = m ∨ n. Then the weighted inner product is defined as

⟨X, Y⟩_W := (1/t) ⟨X ⊗ 1_{t/m}, Y ⊗ 1_{t/n}⟩,    (206)

where ⟨·,·⟩ is the conventional inner product. Say, if F = ℝ, then ⟨X, Y⟩ = X^T Y, and if F = ℂ, then ⟨X, Y⟩ = X̄^T Y.

Definition 221 Let ⌈X⌉, ⌈Y⌉ ∈ Ω_V. Their inner product is defined as

⟨⌈X⌉, ⌈Y⌉⟩ := ⟨X, Y⟩_W, X ∈ ⌈X⌉, Y ∈ ⌈Y⌉.    (207)

It is easy to verify the following proposition, which assures that Definition 221 is reasonable.

Proposition 222 Equation (207) is well defined; that is, it is independent of the choice of X and Y.

Since Ω_V is a vector space, using (207) as an inner product on Ω_V, we have the following:

Proposition 223 The space Ω_V with the inner product defined by (207) is an inner product space. It is not a Hilbert space.

Proof. The first part is obvious. As for the second part, we construct a sequence as

X_1 = a ∈ F,
X_{i+1} = X_i ⊗ 1_2 + (1/2^{i+1}) (δ_{2^i}^1 − δ_{2^i}^2), i = 1, 2, ⋯.

Then we can prove that {X_i} is a Cauchy sequence which does not converge to any X ∈ V. ✷

Given a vector X ∈ V, it determines a mapping ϕ_X : V → F via the inner product as

ϕ_X : Y ↦ ⟨X, Y⟩_W.

Similarly, ⌈X⌉ ∈ Ω_V can also determine a linear mapping ϕ_{⌈X⌉} : Ω_V → F as

ϕ_{⌈X⌉} : ⌈Y⌉ ↦ ⟨⌈X⌉, ⌈Y⌉⟩.

Unfortunately, the converse is not true, because Ω_V is an infinite-dimensional vector space but each equivalence class has only finite-dimensional representatives.

Next, we consider the linear mappings on V. It is well known that if A ∈ M_{m×n} and X ∈ V_n, then the product (A, X) ↦ AX, for given A, is a linear mapping. We intend to generalize such a linear mapping to an arbitrary matrix and an arbitrary vector.

Definition 224 Let A ∈ M_{m×n}, X ∈ V_p, and t = n ∨ p. Then the vector product, denoted by ⋉⃗, is defined as

A ⋉⃗ X := (A ⊗ I_{t/n})(X ⊗ 1_{t/p}).    (208)

Remark 225
(1) The vector product defined in (208) is the left vector product. Of course, we can define the right vector product, denoted by ⋊⃗, as

A ⋊⃗ X := (I_{t/n} ⊗ A)(1_{t/p} ⊗ X).    (209)

(2) In fact, ⋉⃗ is a combination of ∼_ℓ of matrices and ↔_ℓ of vectors, and ⋊⃗ is a combination of ∼_r of matrices and ↔_r of vectors. Of course, we may define two more vector products by combining ∼_ℓ with ↔_r and ∼_r with ↔_ℓ, respectively.
(3) Note that when n = p, A ⋉⃗ X = AX; that is, the linear mapping defined in (208) is a generalization of the conventional linear mapping. This is also true for the other vector products.
(4) To avoid similar but tedious arguments, hereafter the default vector product is ⋉⃗.

The following proposition is easily verifiable.

Proposition 226 Consider the vector product ⋉⃗ : M × V → V.
(1) It is linear with respect to the second variable; precisely,

A ⋉⃗ (aX ⊞⃗ bY) = a A ⋉⃗ X ⊞⃗ b A ⋉⃗ Y, a, b ∈ F.    (210)

(2) Assume both A, B ∈ M_μ; then the vector product is also linear with respect to the first variable; precisely,

(aA ⊞ bB) ⋉⃗ X = a A ⋉⃗ X ⊞⃗ b B ⋉⃗ X.    (211)

The following proposition shows that the vector product is consistent with both the M-equivalence and the V-equivalence.

Proposition 227 Assume A ∼ B and X ↔ Y. Then

A ⋉⃗ X ↔ B ⋉⃗ Y.    (212)

Proof. Assume A = Λ ⊗ I_s, B = Λ ⊗ I_α, X = Γ ⊗ 1_t, Y = Γ ⊗ 1_β, where Λ ∈ M_{n×p} and Γ ∈ V_q. Denote ξ = p ∨ q, η = ps ∨ qt, and η = kξ. Then we have

A ⋉⃗ X = (Λ ⊗ I_s) ⋉⃗ (Γ ⊗ 1_t)
= (Λ ⊗ I_s ⊗ I_{η/(ps)}) (Γ ⊗ 1_t ⊗ 1_{η/(qt)})
= (Λ ⊗ I_{ξ/p} ⊗ I_k) (Γ ⊗ 1_{ξ/q} ⊗ 1_k)
= ((Λ ⊗ I_{ξ/p})(Γ ⊗ 1_{ξ/q})) ⊗ (I_k 1_k)
= (Λ ⋉⃗ Γ) ⊗ 1_k.

Hence

A ⋉⃗ X ↔ Λ ⋉⃗ Γ.

Similarly, we have

B ⋉⃗ Y ↔ Λ ⋉⃗ Γ.

Equation (212) follows. ✷

The above propositions have an immediate consequence as follows:

Corollary 228 The vector product ⋉⃗ : M × V → V can be extended to ⋉⃗ : Σ_M × Ω_V → Ω_V. Particularly, each ⟨A⟩ ∈ Σ_M determines a linear mapping on the vector space Ω_V.

Next, we extend the vector equivalence to matrices.

Definition 229
(1) Assume V, W ∈ M_{·×n}. V and W are said to be vector equivalent, denoted by V ↔ W, if there exist 1_s and 1_t such that

V ⊗ 1_s = W ⊗ 1_t.    (213)

(2) The equivalence class of V is denoted as

⌈V⌉ := { W | W ↔ V }.    (214)

Remark 230
(1) It is clear that the equivalence defined by (213) is the left vector equivalence ↔_ℓ. The right vector equivalence ↔_r can be defined similarly. Moreover, the right vector equivalence class is denoted by ⌈·⌉_r.
(2) A matrix can be considered as a linear mapping, and correspondingly the M-equivalence is considered. A matrix can also be considered as a vector subspace generated by its columns; in this way its V-equivalence, defined by (213)–(214), makes sense.

Denote the equivalence classes as

Ω_M := M / ↔,

and

Ω_M^n := M_{·×n} / ↔.

All the results about the vector space V can be extended to M_{·×n} for n ∈ ℕ. They are briefly summarized as follows:

Proposition 231
(1) Assume V ∈ M_{r×n}, W ∈ M_{s×n}, V ↔ W, and r | s. Then we say V is a divisor of W, or W is a multiple of V. Moreover, the order determined by this relation is denoted as V ⊑ W.
(2) (⌈V⌉, ⊑) is a lattice.
(3) Assume r | s; then an order is given in M_{·×n} as

M_{r×n} ⊑ M_{s×n}.

(4) (M_{·×n}, ⊑) is a lattice.
(5) Assume V ∈ M_{r×n} is irreducible. Then ϕ : ⌈V⌉ → M_{·×n}, defined as V_i ↦ M_{ir×n}, is a lattice homomorphism.

The operators ⊞⃗ and ⋉⃗ can also be defined in a similar way as for vectors:

Definition 232
(1) Assume V ∈ M_{p×n} and W ∈ M_{q×n}, and t = p ∨ q. The vector addition is defined as

V ⊞⃗ W := (V ⊗ 1_{t/p}) + (W ⊗ 1_{t/q}).

(2) Let A ∈ M_{m×n}, V ∈ M_{p×q}, and t = n ∨ p. Then the vector product of A and V is defined as

A ⋉⃗ V := (A ⊗ I_{t/n})(V ⊗ 1_{t/p}).    (215)

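The consistency results of this section (Propositions 215, 222, and 227) lend themselves to direct numerical checks. Below is a sketch assuming numpy; `vadd`, `vprod`, and `wip` are our names for the operations (197), (208), and (206), restricted to the real field.

```python
import numpy as np
from math import lcm

def vadd(X, Y):
    # (197): X boxplus Y = X x 1_{t/p} + Y x 1_{t/q}, t = p v q
    p, q = len(X), len(Y)
    t = lcm(p, q)
    return np.kron(X, np.ones(t // p)) + np.kron(Y, np.ones(t // q))

def vprod(A, X):
    # (208): A ltimes X = (A x I_{t/n})(X x 1_{t/p}), t = n v p
    n, p = A.shape[1], len(X)
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(X, np.ones(t // p))

def wip(X, Y):
    # (206): weighted inner product over R
    m, n = len(X), len(Y)
    t = lcm(m, n)
    return np.dot(np.kron(X, np.ones(t // m)), np.kron(Y, np.ones(t // n))) / t

X = np.array([1., 2.])
Y = np.array([1., 0., 1.])

# Proposition 215: equivalent summands give equivalent sums.
S1 = vadd(X, Y)
S2 = vadd(np.kron(X, np.ones(2)), np.kron(Y, np.ones(2)))
add_consistent = np.array_equal(np.kron(S1, np.ones(2)), S2)

# Proposition 222: the inner product is representative-independent.
ip_consistent = np.isclose(wip(X, Y), wip(np.kron(X, np.ones(2)), Y))

# Proposition 227: A ~ A x I_2 and X <-> X x 1_2 give equivalent products.
A = np.random.default_rng(2).standard_normal((2, 3))
Z1 = vprod(A, X)
Z2 = vprod(np.kron(A, np.eye(2)), np.kron(X, np.ones(2)))
prod_consistent = np.allclose(np.kron(Z1, np.ones(2)), Z2)
```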
48 We can denote the equivalence class of n as 7.4 Type-1 Invariant Subspace M·×

n i Σ := n/ . Given A µ, we seek a subspace S which is A M M·× ↔ ∈ M ⊂ V invariant.

Definition 235 Let S be a vector subspace. If For vector product and vector addition we also have the ⊂V distributive law with respect to these two operators for A⋉~ S S, matrix case. ⊂ S is called an A-invariant subspace. Moreover, if S , Proposition 233 ⋉~ : is distributive. ⊂Vt M×M→M it is called the type-1 invariant subspace; Otherwise, it is Precisely, Let A, B µ and C n β, D p β. ∈M ∈M × ∈M × called the type-2 invariant subspace. Then

± ± This subsection considers the type-1 invariant subspace (aA bB)⋉~ C = aA⋉~ C~ bB⋉~ C, a,b F; (216) only.

± ± ∈ A⋉~ (aC~ bD)= aA⋉~ C~ bA⋉~ D, a,b F. (217) ∈ Proposition 236 Let A and S = . Then S is ∈Mµ Vt A-invariant, if and only if, Proof. We prove (216). The proof of (217) is similar. Denote m n = s, m p = r, n p)= t, and m n p = ξ. (i) ∨ ∨ ∨ ∨ ∨ Then for (216) we have µy =1; (220) ~ LHS = aA Is/m + bB Is/n ⋉C (ii) A i , where i satisfies ⊗ ⊗ ∈Mµ = aA I + bB I I C 1 ⊗ s/m ⊗ s/n ⊗ ξ/s ⊗ ξ/p iµx t = tµx. (221) = a A I C 1 + b B I C  1 ∨ ⊗ ξ/m ⊗ ξ/p ⊗ ξ/n ⊗ ξ/p Proof. (Necessity) Assume ξ = iµx t, by definition we = a A Is/m C 1s/p 1ξ/s   ∨ ⊗ ⊗ ⊗ have + b B It/n (C 1t/p)  1ξ/t A⋉~ X = A I X 1 ⊗ ⊗ ⊗ ⊗ ξ/iµx ⊗ ξ/t ~± = a A Is/m C 1s/p b B It/n (C 1t/p) , X .   ⊗ ⊗ ⊗ ⊗ ∈Vt ∈Vt = RHS.      Hence we have

✷ ξ iµ = t. (222) y iµ  x  The distributive law can be extended to equivalence It follows from (222) that spaces as follows.

ξ µx ~ n n = . (223) Corollary 234 ⋉ : Σ Σ Σ is distributive. t µy M × M → M β Precisely, Let A , B Σµ and C , D Σ . Then h i h i ∈ ⌈ ⌉ ⌈ ⌉ ∈ M Since µx and µy are co-prime and the left hand side of

± ± (223) is an integer, we have µy = 1. It follows that (a A b B )⋉~ C = a A ⋉~ C ~ b B ⋉~ C , h i h i ⌈ ⌉ h i ⌈ ⌉ h i ⌈ ⌉ a,b F. ξ = tµx. ∈ (218)

A ⋉~ (a C ~± b D ) = a A ⋉~ C ~± b A ⋉~ D , (Sufficiency) Assume (221) holds, then (222) holds. It h i ⌈ ⌉ ⌈ ⌉ h i ⌈ ⌉ h i ⌈ ⌉ follows that a,b F. ∈ (219) A⋉~ X , when X . ∈Vt ∈Vt
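For readers who want to experiment, the two operators of Definition 232 can be sketched in a few lines of Python. This is our own minimal illustration, not part of the paper: matrices are lists of rows, the ground field is assumed to be F = R, and the names `vec_add` and `vec_prod` are ours.

```python
from math import lcm

def kron(A, B):
    # Kronecker product A ⊗ B of matrices given as lists of rows
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def vec_add(V, W):
    # V ±~ W := V ⊗ 1_{t/p} + W ⊗ 1_{t/q}, t = p ∨ q   (formula (214))
    t = lcm(len(V), len(W))
    rep = lambda v: [c for c in v for _ in range(t // len(v))]
    return [a + b for a, b in zip(rep(V), rep(W))]

def vec_prod(A, X):
    # A ⋉~ X := (A ⊗ I_{t/n})(X ⊗ 1_{t/p}), t = n ∨ p   (formula (215))
    n, p = len(A[0]), len(X)
    t = lcm(n, p)
    return matvec(kron(A, eye(t // n)), [c for c in X for _ in range(t // p)])

print(vec_prod([[1, 2]], [1, 2, 3]))  # a 1×2 matrix acting on R^3 → [5, 7, 8]
print(vec_add([1, 2], [10, 20, 30]))  # R^2 ±~ R^3 → [11, 11, 21, 22, 32, 32]
```

Note that the result of `vec_prod` lives in R^{mt/n}, which is why a 1×2 matrix maps R^3 back into R^3 here.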

Assume A ∈ M_μ and S = V_t. A natural question is: can we find A such that S is A-invariant? According to Proposition 236, we know that it is necessary that μ_y = 1. Let k_1, ···, k_ℓ be the prime divisors of μ_x ∧ t; then we have

    μ_x = k_1^{α_1} ··· k_ℓ^{α_ℓ} p;  t = k_1^{β_1} ··· k_ℓ^{β_ℓ} q,   (224)

where p, q are co-prime and k_i ∤ p, k_i ∤ q, ∀i.

Now it is obvious that to meet (221) it is necessary and sufficient that

    i = k_1^{β_1} ··· k_ℓ^{β_ℓ} λ,   (225)

where λ | (pq).

Summarizing the above argument, we have the following result.

Proposition 237 Assume A ∈ M_μ, S = V_t. S is A-invariant, if and only if, (i) μ_y = 1, and (ii) A ∈ M_μ^i, where i satisfies (225).

Remark 238 Assume A ∈ M_1, S = V_t. Using Proposition 237, it is clear that S is A-invariant, if and only if, A ∈ M_1^i with i ∈ {ℓ | ℓ | t}. Particularly, when i = 1, then A = a is a number, so it acts as a scalar product on S, i.e., for V ∈ S we have a ⋉~ V = aV ∈ S. When i = t, then A ∈ M_{t×t}, so we have A ⋉~ V = AV ∈ S. This is the classical linear mapping on V_t. We call these two linear mappings the standard linear mappings. The following example shows that there are lots of non-standard linear mappings.

Example 239 (1) Assume μ = 0.5, t = 6. Then μ_y = 1, μ_x = 2, and μ_x ∧ t = 2. Using (224), μ_x = 2 × 1 and t = 2 × 3. That is, p = 1, q = 3. According to (225), i = 2^{β_1} λ, where β_1 = 1, and λ = 1 or λ = 3. According to Proposition 237, V_6 is A-invariant, if and only if, A ∈ M_{0.5}^2 or A ∈ M_{0.5}^6.

(2) Next, we give a numerical example. Assume

    A = [ 1 −1 0 0
          0  0 1 0 ] ∈ M_{0.5}^2.   (226)

Then R^6 is an A-invariant space. For instance, let

    X = (1+i, 2, 1−i, 0, 0, 0)^T ∈ C^6.   (227)

Then

    A ⋉~ X = iX.   (228)

Motivated by (228), we give the following definition.

Definition 240 Assume A ∈ M and X ∈ V. If

    A ⋉~ X = αX, α ∈ F, X ≠ 0,   (229)

then α is called an eigenvalue of A, and X is called an eigenvector of A with respect to α.

Example 241 Recall Example 239. In fact, it is easy to verify that the matrix A in (226), as a linear mapping on R^6, has 6 eigenvalues:

    σ(A) = {i, −i, 0, 0, 0, 1}.

Correspondingly, the first eigenvector is X, defined in (227). The other 5 eigenvectors are (1−i, 2, 1+i, 0, 0, 0)^T, (1, 1, 1, 0, 0, 0)^T, (0, 0, 0, 0, 1, 0)^T (this is a root vector), (0, 0, 0, 0, 0, 1)^T, and (0, 0, 0, 1, 1, 1)^T, respectively.

Assume S = V_t is A-invariant with respect to ⋉~, that is,

    A ⋉~ S ⊂ S.   (230)

Then the restriction A|_S is a linear mapping on S. It follows that there exists a matrix, denoted by A|_t ∈ M_{t×t}, such that A|_S is equivalent to A|_t. We state it as a proposition.

Proposition 242 Assume A ∈ M and S = V_t is A-invariant. Then there exists a unique matrix A|_t ∈ M_{t×t}, such that A|_S is equivalent to A|_t. Precisely,

    A ⋉~ X = A|_t X, ∀X ∈ S.   (231)

A|_t is called the realization of A on S = V_t.

Remark 243 (1) To calculate A|_t from A is easy. In fact, it is clear that

    Col_i(A|_t) = A ⋉~ δ_t^i, i = 1, ···, t.   (232)
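Formula (232) makes the realization computable mechanically. The sketch below is our own illustration (it reuses the same Kronecker-based construction of ⋉~): it rebuilds the 6 × 6 realization of the matrix A of (226) column by column, and checks the eigenvector relation (228) for the vector X of (227).

```python
from math import lcm

def kron(A, B):
    # Kronecker product of matrices given as lists of rows
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def vec_prod(A, X):
    # A ⋉~ X := (A ⊗ I_{t/n})(X ⊗ 1_{t/p}), t = n ∨ p
    n, p, t = len(A[0]), len(X), lcm(len(A[0]), len(X))
    Xr = [c for c in X for _ in range(t // p)]
    return [sum(a * v for a, v in zip(row, Xr)) for row in kron(A, eye(t // n))]

def realization(A, t):
    # Col_i(A|_t) = A ⋉~ δ_t^i   (formula (232))
    cols = [vec_prod(A, [int(r == i) for r in range(t)]) for i in range(t)]
    return [list(row) for row in zip(*cols)]

A = [[1, -1, 0, 0], [0, 0, 1, 0]]      # the matrix of (226)
A6 = realization(A, 6)                 # first column is (1, 1, 0, 0, 0, 0)^T
X = [1 + 1j, 2, 1 - 1j, 0, 0, 0]       # the eigenvector of (227)
assert vec_prod(A, X) == [1j * c for c in X]   # A ⋉~ X = iX, i.e. (228)
```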

(2) Consider Example 239 again.

    Col_1(A|_6) = A ⋉~ δ_6^1
                = ([ 1 −1 0 0
                     0  0 1 0 ] ⊗ I_3)(δ_6^1 ⊗ 1_2)
                = (1, 1, 0, 0, 0, 0)^T.

Similarly, we can calculate all the other columns. Finally, we have

    A|_6 = [ 1 −1  0 0 0 0
             1  0 −1 0 0 0
             0  1 −1 0 0 0
             0  0  0 1 0 0
             0  0  0 1 0 0
             0  0  0 0 1 0 ].

Finally, given a matrix A ∈ M_μ^i, we would like to know whether it has (at least one) type-1 invariant subspace S = V_t. As a corollary of Proposition 236, we can prove the following result.

Corollary 244 Assume A ∈ M_μ. Then A has (at least one) type-1 invariant subspace S = V_t, if and only if,

    μ_y = 1.   (233)

Proof. According to Proposition 236, μ_y = 1 is obviously necessary. We prove it is also sufficient. Assume i is factorized into its prime factors as

    i = ∏_{j=1}^n i_j^{k_j},   (234)

and correspondingly, μ_x is factorized as

    μ_x = ∏_{j=1}^n i_j^{r_j} p,   (235)

where p is co-prime with i; t is factorized as

    t = ∏_{j=1}^n i_j^{t_j} q,   (236)

where q is co-prime with i.

Using Proposition 236 again, we have only to prove that there exists at least one t satisfying (221). Calculate that

    iμ_x ∨ t = ∏_{j=1}^n i_j^{max(r_j+k_j, t_j)} (p ∨ q);
    μ_x t = ∏_{j=1}^n i_j^{r_j+t_j} pq.

To meet (221) a necessary condition is: p and q are co-prime. Next, fix j; we consider two cases. (i) t_j > k_j + r_j: then on the LHS (left hand side) of (221) we have the factor i_j^{t_j} and on the RHS of (221) we have the factor i_j^{r_j+t_j}. Hence, as long as r_j = 0, we can choose t_j > k_j to meet (221). (ii) t_j < k_j + r_j: then on the LHS we have the factor i_j^{k_j+r_j} and on the RHS we have the factor i_j^{r_j+t_j}. Hence, as long as t_j = k_j, (221) is satisfied.   ✷

Using the above notations, we also have the following result.

Corollary 245 Assume A ∈ M_μ^i with μ_y = 1. Then V_t is A-invariant, if and only if, (i) for t_j = 0, the corresponding r_j ≥ k_j; (ii) for t_j > 0, the corresponding r_j = k_j.

Example 246 Recall Example 239 again.

(1) Since the matrix A defined in (226) is in M_{0.5}^2, we have i = 2, μ_y = 1, μ_x = 2 = ip, and hence p = 1. According to Corollary 245, S = V_t is A-invariant, if and only if, t = iq = 2q and q is co-prime with i = 2. Hence

    V_{2(2n+1)}, n = 0, 1, 2, ···,

are the type-1 invariant subspaces of A.
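The count in Example 246 (1) is easy to confirm numerically: for A ∈ M_{2×4} the image of V_t under ⋉~ has dimension 2·(4 ∨ t)/4, so V_t is closed under the action exactly when that number equals t. The check below is our own illustration, not part of the paper.

```python
from math import lcm

def image_dim(m, n, t):
    # A ∈ M_{m×n} maps V_t into V_{m·(n ∨ t)/n}
    return m * lcm(n, t) // n

# For m = 2, n = 4 the invariant dimensions t ≤ 12 are exactly 2, 6, 10,
# i.e. t = 2q with q odd, as claimed in Example 246 (1).
invariant = [t for t in range(1, 13) if image_dim(2, 4, t) == t]
print(invariant)  # [2, 6, 10]
```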

(2) Assume q = 5. Then the restriction is

    A|_{R^10} = A|_10 = [ 1 0 −1  0  0 0 0 0 0 0
                          1 0  0 −1  0 0 0 0 0 0
                          0 1  0 −1  0 0 0 0 0 0
                          0 1  0  0 −1 0 0 0 0 0
                          0 0  1  0 −1 0 0 0 0 0
                          0 0  0  0  0 1 0 0 0 0
                          0 0  0  0  0 1 0 0 0 0
                          0 0  0  0  0 0 1 0 0 0
                          0 0  0  0  0 0 1 0 0 0
                          0 0  0  0  0 0 0 1 0 0 ].

The eigenvalues are

    σ(A|_{R^10}) = {−1, i, −i, 0, 1, 0, 0, 0, 0, 1}.

The corresponding eigenvectors are:

    E_1 = (0, 1, 0, 1, 2, 0, 0, 0, 0, 0)^T,
    E_2 = (0.3162+0.1054i, 0.5270, 0.4216−0.2108i, 0.3162−0.4216i, 0.1054−0.3162i, 0, 0, 0, 0, 0)^T,
    E_3 = (0.3162−0.1054i, 0.5270, 0.4216+0.2108i, 0.3162+0.4216i, 0.1054+0.3162i, 0, 0, 0, 0, 0)^T,
    E_4 = (1, 1, 1, 1, 1, 0, 0, 0, 0, 0)^T,
    E_5 = (2, 1, 0, 1, 0, 0, 0, 0, 0, 0)^T,
    R_6 = (0, 0, 0, 0, 0, 0, 1, 0, 0, 0)^T,
    R_7 = (0, 0, 0, 0, 0, 0, 0, 1, 1, 0)^T,
    E_8 = (0, 0, 0, 0, 0, 0, 0, 0, 0, 1)^T,
    E_9 = (0, 0, 0, 0, 0, 0, 0, 0, 1, 0)^T,
    E_10 = (0, 0, 0, 0, 0, 1, 1, 1, 1, 1)^T.

Note that R_6, R_7 are two root vectors. That is,

    A ⋉~ R_6 = R_7;  A ⋉~ R_7 = E_8.

Remark 247 In fact, the set of type-1 A-invariant subspaces depends only on the shape of A. Hence we define

    I_μ^i := {V_t | V_t is A-invariant for A ∈ M_μ^i}.

We can also briefly call I_μ^i the set of M_μ^i-invariant subspaces.

Before ending this subsection we consider a general type-1 invariant subspace S ⊂ V_t. The following proposition is obvious.

Proposition 248 If S ⊂ V_t is an A-invariant subspace, then V_t is also an A-invariant subspace.

Because of Proposition 248, searching for S becomes a classical problem: we can first find a matrix P ∈ M_{t×t} which is equivalent to A|_{V_t}. Then S must be a classical invariant subspace of P.

7.5 Type-2 Invariant Subspace

Denote the set of type-1 A-invariant subspaces as

    I_A := {V_s | V_s is A-invariant}.

Assume A ∈ M_μ^i. Then I_A = I_μ^i.

To assure I_A ≠ ∅, throughout this subsection we assume μ_y = 1. We give a definition for this.

Definition 249 (1) Assume A ∈ M_μ. A is said to be a bounded operator (or briefly, A is bounded) if μ_y = 1.
(2) A sequence {V_i | i = 1, 2, ···} is called an A-generated sequence if A ⋉~ V_i ⊂ V_{i+1}, i = 1, 2, ···.
(3) A finite sequence {V_i | i = 1, 2, ···, p} is called an A-generated loop if V_p = V_1.

Lemma 250 Assume there is an A-generated loop V_p, V_{q_1}, ···, V_{q_r}, V_p, as depicted in (237):

    V_p →(A) V_{q_1} →(A) ··· →(A) V_{q_r} →(A) V_p.   (237)

Then

    q_j = p, j = 1, ···, r.   (238)
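Lemma 250 can be probed through the dimension dynamics alone: for A ∈ M_{m×n}, one step of the A-sequence sends dimension t to m·(n ∨ t)/n, and a loop of subspaces is a cycle of this map. The sketch below is our own illustration with hypothetical sizes: for a bounded shape every cycle turns out to be a fixed point, while an unbounded shape (compare Example 260 (1) below) drifts off to infinity.

```python
from math import lcm

def dim_step(m, n, t):
    # dimension map of X ↦ A ⋉~ X for A ∈ M_{m×n}
    return m * lcm(n, t) // n

# Bounded case (μ_y = 1): A ∈ M_{2×4}. Every orbit reaches a fixed point,
# so every A-generated loop is trivial, consistent with Lemma 250.
for t0 in range(1, 30):
    t, seen = t0, []
    while t not in seen:
        seen.append(t)
        t = dim_step(2, 4, t)
    assert t == seen[-1]          # the cycle has length one

# Unbounded case (μ_y ≠ 1): A ∈ M_{2×3}. Dimensions eventually double forever.
orbit = [1]
for _ in range(6):
    orbit.append(dim_step(2, 3, orbit[-1]))
print(orbit)  # [1, 2, 4, 8, 16, 32, 64]
```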

Proof. Assume A ∈ M_μ^i. According to Definition 224, we have the following dimension relationship:

    s_0 := iμ_x ∨ p  ⟹ q_1 = μ s_0;
    s_1 := iμ_x ∨ q_1 ⟹ q_2 = μ s_1;
    ⋮                                       (239)
    s_{r−1} := iμ_x ∨ q_{r−1} ⟹ q_r = μ s_{r−1};
    s_r := iμ_x ∨ q_r ⟹ p = μ s_r.

Next, we set s_0 := t_0 p; then q_1 = μ t_0 p; set s_1 := t_1 q_1; then q_2 = μ² t_1 t_0 p; ···; set s_{r−1} := t_{r−1} q_{r−1}; then q_r = μ^r t_{r−1} ··· t_1 t_0 p; set s_r := t_r q_r; then p = μ^{r+1} t_r t_{r−1} ··· t_0 p.

We conclude that

    μ^{r+1} t_r t_{r−1} ··· t_0 = 1.   (240)

Equivalently, we have

    μ_x^{r+1}/μ_y^{r+1} = t_r t_{r−1} ··· t_0.

It follows that μ_y = 1.

Define

    s_r = iμ_x ∨ q_r := iμ_x ξ,

where ξ ∈ N. Then from the last equation of (239) we have

    p = iξ.   (241)

That is,

    μ(iμ_x ∨ q_r) = iξ.

Using (240) and the expression q_r = μ^r t_{r−1} ··· t_1 t_0 p, we have

    μ_x ∨ (μ_x ξ / t_r) = ξ μ_x.

From the above it is clear that

    t_r | μ_x;  μ_x ∧ ξ = 1.   (242)

Next, using the last two equations in (239), we have

    μ(iμ_x ∨ μ lcm(iμ_x, q_{r−1})) = p = iξ.

Similar to the above argument, we have ξμ_x ∨ μ_x ∨ μ(μ q_{r−1}/i), and hence

    μ_x² | ξμ_x ∨ μ_x ∨ μ(μ² ξ /(t_r t_{r−1})).

To meet this requirement, it is necessary that

    t_r t_{r−1} | μ_x².

Continuing this process, finally we have

    t_r t_{r−1} ··· t_s | μ_x^{r−s+1}, s = r−1, r−2, ···, 0.   (243)

Combining (240) with (243) yields that

    t_s = μ_x, s = 0, 1, ···, r.

That is, q_1 = q_2 = ··· = q_r = p.   ✷

Theorem 251 A finite dimensional subspace S ⊂ V is A-invariant, if and only if, S has the following structure:

    S = ⊕_{i=1}^ℓ S^i,   (244)

where

    A ⋉~ S^i ⊂ S^{i+1}, i = 1, ···, ℓ−1.

Proof. Since S is of finite dimension, there are only finitely many V_{t_i} such that

    S^j := S ∩ V_{t_j} ≠ {0}.

Now for each 0 ≠ X_0 ∈ S^j ⊂ V_{t_j} we construct X := A ⋉~ X_0 ∈ V_{t_r} for certain t_r. Note that S is A-invariant; if for all X_0 ∈ S^j we have t_r = t_j, then this S^j = S^ℓ is the end element in the sequence. Otherwise, we can find a successor S^r = S ∩ V_{t_r}. Since there are only finitely many S^j, according to Lemma 250, starting from X_0 ∈ S^j there is only a finite sequence of different S^ℓ till it reaches an A-invariant S^ℓ (equivalently, an A-invariant V_{t_ℓ}). The claim follows.   ✷

7.6 Higher Order Linear Mapping

Definition 252 Let A ∈ M_μ^i, and let V_t be an A-invariant subspace of V. That is, A : V_t → V_t is a linear mapping. The higher order linear mapping of A is defined as

    A^[1] ⋉~ X := A ⋉~ X, X ∈ V_t;
    A^[k+1] ⋉~ X := A ⋉~ (A^[k] ⋉~ X), k ≥ 1.   (245)

Definition 253 Let X ∈ V. The A-sequence of X is the sequence {X_i}, where

    X_0 = X;
    X_{i+1} = A ⋉~ X_i, i = 0, 1, 2, ···.

Using notations (234)–(236), we have the following result.

Lemma 254 Assume A ∈ M_μ^i is bounded and X ∈ V_t, where i, μ = 1/μ_x, and t are described by (234)–(236). Then A ⋉~ X ∈ V_s with V_s ∈ I_A, if and only if, for each 0 < j ≤ n either

    r_j = 0,   (246)

or

    t_j ≤ k_j + r_j.   (247)

Proof. Since

    iμ_x ∨ t = ∏_{j=1}^n i_j^{max(k_j+r_j, t_j)} (p ∨ q),

we have

    s = (iμ_x ∨ t)μ = ∏_{j=1}^n i_j^{max(k_j+r_j, t_j)−r_j} (p ∨ q)/p.

Then we can calculate that

    iμ_x ∨ s = ∏_{j=1}^n i_j^{max(k_j+r_j, max(k_j+r_j, t_j)−r_j)} (p ∨ q),

and

    μ_x s = ∏_{j=1}^n i_j^{max(k_j+r_j, t_j)} (p ∨ q).

Note that V_s ∈ I_A means that V_s is A-invariant. Using Proposition 236, (221) leads to

    max(k_j + r_j, max(k_j + r_j, t_j) − r_j) = max(k_j + r_j, t_j).   (248)

Case 1: t_j > k_j + 2r_j. Then (248) leads to r_j = 0. Hence, we have

    r_j = 0 and t_j > k_j.   (249)

Case 2: k_j + r_j ≤ t_j ≤ k_j + 2r_j, which leads to k_j + r_j = t_j.

Case 3: t_j < k_j + r_j, which assures (248).

Combining Case 2 and Case 3 yields

    t_j ≤ k_j + r_j.   (250)

Note that when r_j = 0, if t_j ≤ k_j we have (250); hence t_j ≤ k_j is also allowed. The conclusion follows.   ✷

The following result is important.

Theorem 255 Let A ∈ M_μ be bounded. Then for any X ∈ V the A-sequence of X enters some V_t ∈ I_A after finitely many steps.

Proof. Assume X_1 := A ⋉~ X ∈ V_{t^1}. Using notations (234)–(236), it is easy to calculate that after one step the j-th index of t becomes

    t_j^1 = max(k_j + r_j, t_j^0) − r_j.

Assume r_j = 0; then this component already meets the requirement of Lemma 254. Assume for some j, r_j > 0 and t_j^0 > k_j + r_j; then we have

    t_j^1 = t_j^0 − r_j, ···, t_j^k = t_j^0 − k r_j,   (251)

which satisfies (247) after finitely many steps; and as long as (247) holds, t_j^s = t_j^k, ∀s > k. Hence, after finitely many steps either (246) or (247) (or both) is satisfied. Then at the next step the sequence enters into V_t ∈ I_A.   ✷

Definition 256 Given a polynomial

    p(x) = x^n + c_{n−1} x^{n−1} + ··· + c_1 x + c_0,   (252)

a matrix A ∈ M and a vector X ∈ V.

(1) p(x) is called an A-annihilator of X, if

    p(A)X := A^[n] ⋉~ X ±~ c_{n−1} A^[n−1] ⋉~ X ±~ ··· ±~ c_1 A ⋉~ X ±~ c_0 X = 0.   (253)

(2) Assume q(x) is the A-annihilator of X with minimum degree; then q(x) is called the minimum A-annihilator of X.

Remark 257 Theorem 255 shows why A ∈ M_μ with μ_y = 1 is called a bounded operator. In fact, μ_y = 1 is necessary and sufficient for A to have a finite dimensional invariant subspace of either type-1 or type-2. We also know that if A has a type-2 invariant subspace, it also has a type-1 invariant subspace. If μ_y ≠ 1, A is called an unbounded operator.

The following result is obvious:

Proposition 258 The minimum A-annihilator of X divides any A-annihilator of X.

The following result is an immediate consequence of Theorem 255.

Corollary 259 Assume A is bounded. Then for any X ∈ V there exists at least one A-annihilator of X.

Proof. According to Theorem 255, there is a finite k such that A^[k] ⋉~ X ∈ V_s with V_s being A-invariant. Now in V_s assume the minimum annihilator polynomial for A^[k] ⋉~ X is q(x); then p(x) = x^k q(x) is an A-annihilator of X.   ✷

Example 260 (1) Assume A ∈ M_{2/3}^1. Since μ_y = 2 ≠ 1, we know that no X has an A-annihilator. Now assume X_0 ∈ V_k, where k = 3^s p, and 3, p are co-prime. Then it is easy to see that the A-sequence of X_0 has the dimensions dim(X_i) := d_i, which are: d_1 = 2 × 3^{s−1} p, d_2 = 2² × 3^{s−2} p, ···, d_s = 2^s p, d_{s+1} = 2^{s+1} p, d_{s+2} = 2^{s+2} p, ···. It can not reach a V_t ∈ I_A.

(2) Given

    A = [ 1 0 1 1
          0 1 0 1 ],  X = (1, 0, 0)^T,

we try to find the minimum A-annihilator of X. Set X_0 = X. It is easy to see that

    X_1 = A ⋉~ X_0 ∈ V_6 ∈ I_A.

Hence, we can find the annihilator of X in the space R^6. Calculating

    X_1 = (1, 1, 0, 0, 0, 0)^T;  X_2 = (1, 1, 1, 1, 0, 0)^T;  X_3 = (2, 2, 1, 0, 0, 1)^T;
    X_4 = (2, 1, 1, 2, 1, 1)^T;  X_5 = (3, 3, 1, −1, −1, 0)^T;  X_6 = (3, 2, 2, 4, 2, 2)^T,

it is easy to verify that X_1, X_2, X_3, X_4, X_5 are linearly independent. Moreover,

    X_6 = X_1 + X_2 + X_3 + X_4 − X_5.

The minimum A-annihilator of X = X_0 follows as

    p(x) = x^6 + x^5 − x^4 − x^3 − x^2 − x.

7.7 Invariant Subspace on V-equivalence Space

Recall that

    Ω_M := M/↔;  Ω_M^n := M_{·×n}/↔.

We extend the vector product to the equivalence spaces.

Definition 261 Let A ∈ M and B ∈ M_{·×q}. Then we define ⋉~ : Σ_M × Ω_M^q → Ω_M^q as

    ⟨A⟩ ⋉~ ⌈B⌉ := ⌈A ⋉~ B⌉.   (254)

The following proposition shows that (254) is well defined.
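The linear-algebra step in Example 260 (2), namely checking that X_1, ..., X_5 are independent while X_6 is the stated combination, can be verified mechanically. The sketch below is our own illustration over exact rationals, using the vectors exactly as printed in the example.

```python
from fractions import Fraction

def rank(rows):
    # row rank via Gaussian elimination over exact rationals
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

X1, X2, X3 = [1, 1, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0], [2, 2, 1, 0, 0, 1]
X4, X5, X6 = [2, 1, 1, 2, 1, 1], [3, 3, 1, -1, -1, 0], [3, 2, 2, 4, 2, 2]

assert rank([X1, X2, X3, X4, X5]) == 5          # X1..X5 are independent
assert X6 == [a + b + c + d - e
              for a, b, c, d, e in zip(X1, X2, X3, X4, X5)]
# hence X6 = X1 + X2 + X3 + X4 - X5, which gives the minimum annihilator
# p(x) = x^6 + x^5 - x^4 - x^3 - x^2 - x
```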

Proposition 262 (254) is independent of the choice of A and B.

Proof. Assume A_1 ∈ ⟨A⟩ is irreducible and A_1 ∈ M_{m×n}; B_1 ∈ ⌈B⌉ is also irreducible and B_1 ∈ M_{p×q}. Let A_i = A_1 ⊗ I_i and B_j = B_1 ⊗ 1_j. Set s = n ∨ p, t = ni ∨ pj, and sξ = t. Then

    A_i ⋉~ B_j = (A_i ⊗ I_{t/(ni)})(B_j ⊗ 1_{t/(pj)})
               = (A_1 ⊗ I_i ⊗ I_{t/(ni)})(B_1 ⊗ 1_j ⊗ 1_{t/(pj)})
               = (A_1 ⊗ I_{t/n})(B_1 ⊗ 1_{t/p})
               = (A_1 ⋉~ B_1) ⊗ 1_ξ ↔ A_1 ⋉~ B_1.   ✷

Precisely speaking, because V is not a vector space, an "invariant subspace" is not a rigorous subspace. But Ω_V := V/↔ is a vector space. It is easy to see that the results about V, M, and M_{·×n} can be extended to Ω_V and Ω_M. For instance, ⟨A⟩ ∈ Σ_μ is a bounded operator on Ω_V if and only if μ_y = 1.

7.8 Generalized Linear System

Definition 263 ([30,35]) Let S be a semigroup, X a Hausdorff space. A mapping ϕ : S × X → X is called a topological dynamics, if

(1) ϕ(s_1, ϕ(s_2, x)) = ϕ(s_1 s_2, x), s_1, s_2 ∈ S, x ∈ X;   (255)
(2) ϕ(e, x) = x, x ∈ X,   (256) where e ∈ S is the identity of S;
(3) for each s ∈ S, ϕ_s : X → X is continuous.

Theorem 264 Let S = M and X = V. Then ⋉~ : S × X → X is a topological dynamics. Precisely, let F = R and A ∈ M; then

    x(t+1) := A ⋉~ x(t),
    x(0) = x_0 ∈ V,   (257)

is a topological dynamics, which is called a generalized linear system.

To prove this theorem we have to prove that the conditions (1)–(3) are satisfied. It is not difficult to see that (2) and (3) are satisfied, so we need only to prove (255). We state it as the following lemma.

Lemma 265 For any two matrices A, B ∈ M and any vector X ∈ V, it holds that

    (A ⋉ B) ⋉~ X = A ⋉~ (B ⋉~ X).   (258)

Proof. Assume A ∈ M_{m×n}, B ∈ M_{p×q}, X ∈ R^r. Then

    (A ⋉ B) ⋉~ X
    = ((A ⊗ I_{t/n})(B ⊗ I_{t/p})) ⋉~ X
    = ((A ⊗ I_{t/n})(B ⊗ I_{t/p}) ⊗ I_{sp/(qt)})(X ⊗ 1_{s/r})
    = (A ⊗ I_{sp/(nq)})(B ⊗ I_{s/q})(X ⊗ 1_{s/r})   (259)
    = (A ⊗ I_{sp/(nq)})((B ⊗ I_{ℓ/q})(X ⊗ 1_{ℓ/r}) ⊗ 1_φ)
    = (A ⊗ I_{sp/(nq)})((B ⋉~ X) ⊗ 1_φ),

where

    t = n ∨ p,
    s = (qt/p) ∨ r,
    ℓ = q ∨ r,
    s = ℓφ.

Note that B ⋉~ X ∈ R^{pℓ/q}. By definition, if

    (n ∨ (pℓ/q))/n = sp/(nq)   (260)

and

    (n ∨ (pℓ/q))/(pℓ/q) = φ,   (261)

then (259) becomes

    A ⋉~ (B ⋉~ X),

and we are done.
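Lemma 265, the semigroup property that makes (257) a topological dynamics, can be spot-checked numerically. Below is our own pure-Python sketch of both products; the three test triples are small hand-checked cases, not taken from the paper.

```python
from math import lcm

def kron(A, B):
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def stp(A, B):
    # A ⋉ B := (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = n ∨ p
    n, p = len(A[0]), len(B)
    t = lcm(n, p)
    return matmul(kron(A, eye(t // n)), kron(B, eye(t // p)))

def vec_prod(A, X):
    # A ⋉~ X := (A ⊗ I_{t/n})(X ⊗ 1_{t/p}), t = n ∨ p
    n, p, t = len(A[0]), len(X), lcm(len(A[0]), len(X))
    Xr = [c for c in X for _ in range(t // p)]
    return [sum(a * v for a, v in zip(row, Xr)) for row in kron(A, eye(t // n))]

cases = [([[1, 2]], [[1, 0, 1]], [1, 1]),
         ([[1], [2]], [[3]], [1, 2]),
         ([[1, 1]], [[1], [2]], [5])]
for A, B, X in cases:
    assert vec_prod(stp(A, B), X) == vec_prod(A, vec_prod(B, X))  # (258)

print(vec_prod(stp([[1, 2]], [[1, 0, 1]]), [1, 1]))  # [6]
```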

It is clear that both (260) and (261) are equivalent to

    n ∨ (pℓ/q) = sp/q.   (262)

Hence as long as (262) holds, we are done.

In the following we prove (262). Since t = n ∨ p, assume n ∧ p = u; then

    n = αu, p = βu, t = αβu, α ∧ β = 1.

Next,

    s = (qt/p) ∨ r = qα ∨ r.

Assume d_1, d_2, ···, d_s are the prime factors of q ∧ r. Then we can express q and r as follows:

    q = d_1^{Q_1} d_2^{Q_2} ··· d_s^{Q_s} q_0,
    r = d_1^{R_1} d_2^{R_2} ··· d_s^{R_s} r_0,

where q_0 ∧ r_0 = 1, q_0 ∧ d_i = 1, r_0 ∧ d_i = 1, Q_i > 0, R_i > 0, i = 1, ···, s.

Assume ε_1, ···, ε_t are the common prime factors of α and r which are distinct from {d_1, ···, d_s}. Then we can express α and r respectively as follows:

    α = d_1^{A_1} d_2^{A_2} ··· d_s^{A_s} ε_1^{B_1} ε_2^{B_2} ··· ε_t^{B_t} α_0,
    r = d_1^{R_1} d_2^{R_2} ··· d_s^{R_s} ε_1^{C_1} ε_2^{C_2} ··· ε_t^{C_t} r_00,

where B_i > 0, C_i > 0, i = 1, ···, t, and A_j ≥ 0, j = 1, ···, s (A_j might be zero, if α has no such factor). Now α_0 ∧ r_00 = 1, α_0 ∧ d_j = 1, r_00 ∧ d_j = 1, j = 1, ···, s, α_0 ∧ ε_i = 1, r_00 ∧ ε_i = 1, i = 1, ···, t.

Then

    ℓ/q = d_1^{max(Q_1,R_1)−Q_1} ··· d_s^{max(Q_s,R_s)−Q_s} ε_1^{C_1} ··· ε_t^{C_t} r_00;
    pℓ/q = d_1^{max(Q_1,R_1)−Q_1} ··· d_s^{max(Q_s,R_s)−Q_s} ε_1^{C_1} ··· ε_t^{C_t} r_00 βu.

Hence, we have

    n ∨ (pℓ/q) = (u α_0 d_1^{A_1} ··· d_s^{A_s} ε_1^{B_1} ··· ε_t^{B_t}) ∨ (d_1^{max(Q_1,R_1)−Q_1} ··· d_s^{max(Q_s,R_s)−Q_s} ε_1^{C_1} ··· ε_t^{C_t} r_00 βu)
    = u α_0 r_00 β d_1^{max(max(Q_1,R_1)−Q_1, A_1)} ··· d_s^{max(max(Q_s,R_s)−Q_s, A_s)} ε_1^{max(C_1,B_1)} ··· ε_t^{max(C_t,B_t)}.   (263)

Next, we calculate the right hand side of (262):

    qα = d_1^{A_1+Q_1} ··· d_s^{A_s+Q_s} ε_1^{B_1} ··· ε_t^{B_t} q_0 α_0;
    s = (qα) ∨ r = d_1^{max(A_1+Q_1, R_1)} ··· d_s^{max(A_s+Q_s, R_s)} ε_1^{max(B_1,C_1)} ··· ε_t^{max(B_t,C_t)} q_0 α_0 r_00.

Then we have

    sp/q = d_1^{max(A_1+Q_1,R_1)−Q_1} ··· d_s^{max(A_s+Q_s,R_s)−Q_s} ε_1^{max(B_1,C_1)} ··· ε_t^{max(B_t,C_t)} β u α_0 r_00.   (264)

Comparing (263) with (264), one sees easily that to prove (262) it is enough to prove

    max(A_i + Q_i, R_i) − Q_i = max(max(Q_i, R_i) − Q_i, A_i), i = 1, ···, s.   (265)

Note that the left hand side of (265) equals max(A_i, R_i − Q_i). Then it is easy to verify that when R_i ≥ Q_i both sides of (265) equal max(A_i, R_i − Q_i), and when R_i < Q_i both sides equal A_i.   ✷

Remark 266 (1) The invariant subspace has been discussed in the previous subsections. Then it is clear that the generalized linear system (257) is dimension-bounded, which means there exists an n such that dim(x(t)) ≤ n for all t.

8 Concluding Remarks

Matrix theory is one of the most fundamental and useful tools in modern science and technology. But one of its major weaknesses is its dimension restriction. To overcome this barrier, the purpose of this paper is to set up a framework for an almost dimension-free matrix theory.

First, we reviewed the STP (⋉), which extends the conventional matrix product to the set of all matrices M. The related monoid structure for (M, ⋉) is obtained. The M-equivalence ∼ is proposed. A lattice structure over each equivalence class is obtained. The equivalence space, as the quotient space M/∼, is introduced and discussed.

Second, the set of all matrices is partitioned into subspaces as M = ∪_{μ∈Q_+} M_μ. The STA (±) is proposed. Under this addition the quotient spaces Σ_μ = M_μ/∼ become vector spaces. Certain geometric and algebraic structures are revealed, including a topological structure, an inner product structure, a differential manifold structure, etc.

Particularly, when μ = 1 (corresponding to square matrices) we have extended the Lie algebra and Lie group theory to M_1. A fiber bundle structure, called the discrete bundle, is proposed for M_μ and the extended Lie group and Lie algebra.

Finally, the set of all vectors V is considered as a universal vector space, based on the vector equivalence ↔. A matrix A of any dimension can be considered either as a linear mapping on V, or as a subspace generated by its columns. The A-invariant subspace is discussed in detail. Many key concepts, such as the eigenvalue/eigenvector and the characteristic polynomial of a matrix, have been extended from square matrices to non-square matrices.

It was said by Asimov that "Only in mathematics is there no significant correction - only extension. Once the Greeks had developed the deductive method, they were correct in what they did, correct for all time." [43]

All extensions we did in this paper are consistent with the classical ones. That is, when the dimension restrictions required by the classical matrix theory are satisfied, the new operators proposed in this paper coincide with the classical ones.

There are many questions remaining for further discussion. For instance, is it possible to construct an equivalence over M, which is consistent with a certain matrix product, such that the quotient space becomes a vector space?

The following are some possible equivalences on M.

(1) Equivalence 1:

Definition 267 Let A, B ∈ M. A is said to be equivalent to B, denoted by A ≃ B, if there exist 1_α, 1_β, I_i, I_j such that

    1_α^T ⊗ A ⊗ I_i = 1_β^T ⊗ B ⊗ I_j.   (266)

It is easy to verify that ≃ is an equivalence relation. Moreover, similar to the M-equivalence or the vector equivalence, we have the following result:

Theorem 268 Assume A ≃ B; then there exists a Λ such that

    A = 1_p^T ⊗ Λ ⊗ I_s,
    B = 1_q^T ⊗ Λ ⊗ I_t.   (267)

Hence a lattice structure similar to that of the M-equivalence exists. It may be considered as a combination of the M- and V-equivalences. Unfortunately, (i) it is not consistent with the STP (⋉); (ii) the quotient space is not a vector space.

(2) Equivalence 2:

Definition 269 Let A, B ∈ M. A is said to be equivalent to B, if there exist 1_α, 1_β, 1_i, 1_j such that

    1_α^T ⊗ A ⊗ 1_i = 1_β^T ⊗ B ⊗ 1_j.   (268)

The lattice structure can also be determined in a similar way. Moreover, the quotient space is a vector space. Unfortunately, a proper product, which is consistent with the equivalence, is unknown.

Further geometric/algebraic structures may be investigated.

(1) More geometric structures on the equivalence space could be interesting. For instance, a Riemannian geometric structure or a symplectic geometric structure may be posed on the equivalence space.

(2) Under the STP (⋉) and the STA (⊢±), for any A ∈ M_μ and B ∈ M_λ,

    [A, B] = A ⋉ B ⊢− B ⋉ A   (269)

is well defined. Moreover, the three requirements (146)–(148) in Definition 146 can also be satisfied (under obvious modification). Exploring the properties of this generalized Lie algebra is challenging and interesting.

One may be more interested in its applications. For instance, can we use the extended structures proposed in this paper for the analysis and control of certain dynamic systems? Particularly, we may consider the following special cases:

(1) Consider a dynamic system

    ẋ = A(t)x, x ∈ R^n,

where A(t) satisfies

    Ȧ(t) = V(x),

where V(x) is a vector field on M_{n×n}. What can we say about this system? Is it possible to extend this system to the equivalence space Σ?

(2) Consider a dimension-varying dynamic control system

    x(t+1) = A ⋉~ x(t) ±~ B ⋉~ u(t),
    y(t) = C ⋉~ x(t),

where x(t) ∈ V. What can we say about this, say, its controllability, observability, etc.?

In one word, this paper could be the beginning of investigating dimension-free matrix theory and its applications.

Acknowledgment

The author would like to thank the anonymous reviewers for their valuable suggestions, comments, and detailed typo corrections.

References

[1] W.M. Boothby, An Introduction to Differentiable Manifolds and Riemannian Geometry, Academic Press, New York, 1979.
[2] S. Burris, H.H. Sankappanavar, A Course in Universal Algebra, Springer-Verlag, New York, 1981.
[3] D. Cheng, Semi-tensor product of matrices and its application to Morgan's problem, Science in China, Series F: Information Science, Vol. 44, No. 3, 195-212, 2001.
[4] D. Cheng, J. Ma, Q. Lu, S. Mei, Quadratic form of stable sub-manifold for power systems, Int. J. Rob. Nonlin. Contr., Vol. 14, 773-788, 2004.
[5] D. Cheng, H. Qi, Semi-tensor Product of Matrices — Theory and Applications, Science Press, Beijing, 2007 (Second Ed., 2011). (in Chinese)
[6] D. Cheng, H. Qi, Z. Li, Analysis and Control of Boolean Networks: A Semi-tensor Product Approach, Springer, London, 2011.
[7] D. Cheng, H. Qi, Y. Zhao, An Introduction to Semi-tensor Product of Matrices and Its Applications, World Scientific, Singapore, 2012.
[8] D. Cheng, J. Feng, H. Lv, Solving fuzzy relational equations via semi-tensor product, IEEE Trans. Fuzzy Systems, Vol. 20, No. 2, 390-396, 2012.
[9] D. Cheng, X. Xu, Bi-decomposition of multi-valued logical functions and its applications, Automatica, Vol. 49, No. 7, 1979-1985, 2013.
[10] D. Cheng, On finite potential games, Automatica, Vol. 50, No. 7, 1793-1801, 2014.
[11] D. Cheng, F. He, H. Qi, T. Xu, Modeling, analysis and control of networked evolutionary games, IEEE Trans. Aut. Contr., Vol. 60, No. 9, 2402-2415, 2015.
[12] D. Cheng, Structure of matrices under equivalence, http://arxiv.org/abs/1605.09523.
[13] M.L. Curtis, Matrix Groups, 2nd Ed., Springer-Verlag, New York, 1984.
[14] J. Dieudonne, Foundation of Modern Analysis, Academic Press, New York, 1969.
[15] J. Dugundji, Topology, Allyn and Bacon, Boston, 1966.
[16] J. Fang, An Introduction to Lattice, Higher Education Press, Beijing, 2014. (in Chinese)
[17] J. Feng, H. Lv, D. Cheng, Multiple fuzzy relation and its application to coupled fuzzy control, Asian J. Contr., Vol. 15, No. 5, 1313-1324, 2013.
[18] E. Fornasini, M.E. Valcher, Observability, reconstructibility and state observers of Boolean control networks, IEEE Trans. Aut. Contr., Vol. 58, No. 6, 1390-1401, 2013.
[19] J.B. Fountain, Abundant semigroups, Proc. London Math. Soc., Vol. 44, 103-129, 1982.
[20] B. Gao, L. Li, H. Peng, et al., Principle for performing attractor transits with single control in Boolean networks, Physical Review E, Vol. 88, No. 6, 062706, 2013.
[21] P. Guo, Y. Wang, H. Li, Algebraic formulation and strategy optimization for a class of evolutionary networked games via semi-tensor product method, Automatica, Vol. 49, No. 11, 3384-3389, 2013.
[22] B.C. Hall, Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer-Verlag, New York, 2003.
[23] G. Hochma, M. Margaliot, E. Fornasini, Symbolic dynamics of Boolean control networks, Automatica, Vol. 49, No. 8, 2525-2530, 2013.
[24] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, Cambridge, 1985.
[25] J.M. Howie, Fundamentals of Semigroup Theory, Clarendon Press, Oxford, 1995.
[26] J.E. Humphreys, Introduction to Lie Algebras and Representation Theory, second printing, revised, Springer-Verlag, New York, 1972.
[27] D. Husemoller, Fiber Bundles, 3rd Ed., Springer, New York, 1994.
[28] N. Jacobson, Basic Algebra I, second Ed., Freeman Comp., New York, 1985.
[29] J.L. Kelley, General Topology, Springer-Verlag, New York, 1975.
[30] S. Koppelberg, Ultrafilters, Semigroups, and Topology, Lecture Notes, Freie University Berlin, Chapter 9, 1975.
[31] S. Lang, Algebra, Springer-Verlag, New York, 2002.
[32] D. Laschov, M. Margaliot, Minimum-time control of Boolean networks, SIAM J. Contr. Opt., Vol. 51, No. 4, 2869-2892, 2013.
[33] R. Li, T. Chu, Synchronization in an array of coupled Boolean networks, Physics Letters A, Vol. 376, No. 45, 3071-3075, 2012.
[34] H. Li, Y. Wang, Boolean derivative calculation with application to fault detection of combinational circuits via the semi-tensor product method, Automatica, Vol. 48, No. 4, 688-693, 2012.
[35] Z. Liu, F. Qiao, S-system of Semigroup, 2nd Ed., Science Press, Beijing, 2008. (in Chinese)
[36] X. Liu, Y. Xu, An inquiry method of transit network based on semi-tensor product, Complex Systems and Complexity Science, Vol. 10, No. 1, 38-44, 2013.
[37] X. Liu, J. Zhu, On potential equations of finite games, Automatica, Vol. 68, 245-253, 2016.
[38] J. Lu, J. Zhong, D.W.C. Ho, Y. Tang, J. Cao, On controllability of delayed Boolean control networks, SIAM J. Contr. Opt., Vol. 54, No. 2, 475-494, 2016.
[39] J. Lu, H. Li, Y. Liu, F. Li, A survey on semi-tensor product method with its applications in logical networks and other finite-valued systems, IET Contr. Theory Appl., Vol. 11, No. 13, 2040-2047, 2017.
[40] S. Mei, F. Liu, A. Xie, Transient Analysis of Power Systems — A Semi-tensor Product Approach, Tsinghua Univ. Press, Beijing, 2010. (in Chinese)
[41] L. Rade, B. Westergren, Mathematics Handbook for Science and Engineering, Studentlitteratur, Lund, 1989.
[42] M. Spivak, A Comprehensive Introduction to Differential Geometry, Publish or Perish Inc., Berkeley, 1979.
[43] T. Tang, J. Ding, Mathematical Writing in English, Higher Education Press, Beijing, 2013.
[44] A.E. Taylor, D.C. Lay, Introduction to Functional Analysis, 2nd Ed., John Wiley & Sons, New York, 1980.
[45] Y. Wang, C. Zhang, Z. Liu, A matrix approach to graph maximum stable set and coloring problems with application to multi-agent systems, Automatica, Vol. 48, No. 7, 1227-1236, 2012.
[46] Z. Wan, Lie Algebra, 2nd Ed., Higher Education Press, Beijing, 2013. (in Chinese)
[47] Y. Wang, T. Liu, D. Cheng, Some notes on semi-tensor product of matrices and swap matrix, J. Sys. Sci. & Math. Scis., in press. (in Chinese)
[48] S. Willard, General Topology, Addison-Wesley Pub., New York, 1970.
[49] Y. Wu, T. Shen, An algebraic expression of finite horizon optimal control algorithm for stochastic logical dynamic systems, Sys. Contr. Lett., Vol. 82, 108-144, 2015.
[50] X. Xu, Y. Hong, Matrix expression to model matching of asynchronous sequential machines, IEEE Trans. Aut. Contr., Vol. 58, No. 11, 2974-2979, 2013.
[51] A. Xue, F. Wu, Q. Lu, S. Mei, Power system dynamic security region and its approximations, IEEE Trans. Circ. Sys. I, Vol. 53, No. 12, 2849-2859, 2006.
[52] Y. Yan, Z. Chen, Z. Liu, Semi-tensor product approach to controllability and stabilizability of finite automata, J. Syst. Engn. Electron., Vol. 26, No. 1, 134-141, 2015.
[53] X. Zhang, Matrix Analysis and Applications, Tsinghua Univ. Press, Beijing, 2004. (in Chinese)
[54] Y. Zhao, J. Kim, M. Filippone, Aggregation algorithm towards large-scale Boolean network analysis, IEEE Trans. Aut. Contr., Vol. 58, No. 8, 1976-1985, 2013.
[55] L. Zhan, J. Feng, Mix-valued logic-based formation control, Int. J. Contr., Vol. 86, No. 6, 1191-1199, 2013.
[56] D. Zhao, H. Peng, L. Li, et al., Novel way to research nonlinear feedback shift register, Science China Information Sciences, Vol. 57, No. 9, 1-14, 2014.
[57] K. Zhang, L. Zhang, L. Xie, Invertibility and nonsingularity of Boolean control networks, Automatica, Vol. 60, 155-164, 2015.
[58] J. Zhong, D. Lin, A new linearization method for nonlinear feedback shift registers, Journal of Computer and System Sciences, Vol. 81, 783-796, 2015.
[59] J. Zhong, J. Lu, L. Li, J. Cao, Finding graph minimum stable set and core via semi-tensor product approach, Neurocomputing, Vol. 174, 588-596, 2016.
[60] Y. Zou, J. Zhu, Kalman decomposition for Boolean control networks, Automatica, Vol. 54, 64-71, 2015.
