Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 296185, 8 pages
http://dx.doi.org/10.1155/2013/296185

Research Article
On the Kronecker Products and Their Applications

Huamin Zhang1,2 and Feng Ding1

1 Key Laboratory of Advanced Process Control for Light Industry of Ministry of Education, Jiangnan University, Wuxi 214122, China
2 Department of Mathematics and Physics, Bengbu College, Bengbu 233030, China

Correspondence should be addressed to Feng Ding; [email protected]

Received 10 March 2013; Accepted 6 June 2013

Academic Editor: Song Cen

Copyright © 2013 H. Zhang and F. Ding. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper studies the properties of the Kronecker product related to the mixed products, the vector operator, and the vec-permutation matrix and gives several theorems and their proofs. In addition, we establish the relations between the singular values of two matrices and their Kronecker product and the relations between the determinant, the trace, the rank, and the polynomial matrix of the Kronecker products.

1. Introduction

The Kronecker product, named after the German mathematician Leopold Kronecker (December 7, 1823–December 29, 1891), is very important in the areas of linear algebra and signal processing. In fact, the Kronecker product should be called the Zehfuss product, because Johann Georg Zehfuss published a paper in 1858 which contained the well-known determinant conclusion |A ⊗ B| = |A|^n |B|^m for square matrices A and B of order m and n [1].

The Kronecker product has wide applications in system theory [2–5], matrix calculus [6–9], matrix equations [10, 11], system identification [12–15], and other special fields [16–19]. Steeb and Wilhelm extended the exponential functions formulas and the trace formulas of the exponential functions of the Kronecker products [20]. For estimating the upper and lower dimensions of the ranges of the two well-known linear transformations T1(X) = X − AXB and T2(X) = AX − XB, Chuai and Tian established some rank equalities and inequalities for the Kronecker products [21]. Corresponding to two different kinds of matrix partition, Koning, Neudecker, and Wansbeek developed two generalizations of the Kronecker product and two related generalizations of the vector operator [22]. The Kronecker product also plays an important role in linear matrix equation theory, and the solution of the Sylvester and Sylvester-like equations is an active research area. Recently, innovational and computationally efficient numerical algorithms based on the hierarchical identification principle were proposed by Ding and Chen for the generalized Sylvester matrix equations [23–25] and coupled matrix equations [10, 26]. On the other hand, iterative algorithms for the extended Sylvester-conjugate matrix equations were discussed in [27–29]. Other related work is included in [30–32].

This paper establishes a new result about the singular values of the Kronecker product and gives a definition of the vec-permutation matrix. In addition, we prove the mixed products theorem and the conclusions on the vector operator by a different method.

This paper is organized as follows. Section 2 gives the definition of the Kronecker product. Section 3 lists some properties based on the mixed products theorem. Section 4 presents some interesting results about the vector operator and the vec-permutation matrices. Section 5 discusses the determinant, trace, and rank properties and the properties of polynomial matrices.

2. The Definition and the Basic Properties of the Kronecker Product

Let F be a field, such as R or C. For any matrices A = [a_ij] ∈ F^{m×n} and B ∈ F^{p×q}, their Kronecker product

(i.e., the direct product or tensor product), denoted by A ⊗ B, is defined by

    A ⊗ B = [a_ij B] = [ a_11 B   a_12 B   ⋯   a_1n B ]
                       [ a_21 B   a_22 B   ⋯   a_2n B ]
                       [    ⋮        ⋮              ⋮  ]
                       [ a_m1 B   a_m2 B   ⋯   a_mn B ]   ∈ F^{(mp)×(nq)}.                (1)

It is clear that the Kronecker product of two diagonal matrices is a diagonal matrix and that the Kronecker product of two upper (lower) triangular matrices is an upper (lower) triangular matrix. Let A^T and A^H denote the transpose and the Hermitian transpose of the matrix A, respectively, and let I_m be the identity matrix of order m. The following basic properties are obvious:

(1) I_m ⊗ A = diag[A, A, ..., A],
(2) if α = [a_1, a_2, ..., a_m]^T and β = [b_1, b_2, ..., b_n]^T, then αβ^T = α ⊗ β^T = β^T ⊗ α ∈ F^{m×n},
(3) if A = [A_ij] is a block matrix, then for any matrix B, A ⊗ B = [A_ij ⊗ B],
(4) (μA) ⊗ B = A ⊗ (μB) = μ(A ⊗ B),
(5) (A + B) ⊗ C = A ⊗ C + B ⊗ C,
(6) A ⊗ (B + C) = A ⊗ B + A ⊗ C,
(7) A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C = A ⊗ B ⊗ C,
(8) (A ⊗ B)^T = A^T ⊗ B^T,
(9) (A ⊗ B)^H = A^H ⊗ B^H.

Property 2 indicates that α and β^T commute under the Kronecker product. Property 7 shows that the expression A ⊗ B ⊗ C is unambiguous.
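For illustration, definition (1) and the basic properties above can be checked numerically with NumPy, whose kron routine builds exactly this block structure; the matrix sizes and random test matrices in the following sketch are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 2, 3, 2, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
C = rng.standard_normal((p, q))
X = rng.standard_normal((p, q))

K = np.kron(A, B)                           # A ⊗ B
assert K.shape == (m * p, n * q)            # an (mp) x (nq) matrix, cf. (1)
assert np.allclose(K[:p, :q], A[0, 0] * B)  # the (1,1) block of A ⊗ B is a_11 B

# property (5): (A + B) ⊗ C = A ⊗ C + B ⊗ C (here with p x q matrices B, C, X)
assert np.allclose(np.kron(B + C, X), np.kron(B, X) + np.kron(C, X))
# property (7): A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C
assert np.allclose(np.kron(A, np.kron(B, C)), np.kron(np.kron(A, B), C))
# property (8): (A ⊗ B)^T = A^T ⊗ B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))

print("basic Kronecker product properties verified")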

3. The Properties of the Mixed Products

This section discusses the properties based on the mixed products theorem [6, 33, 34].

Theorem 1. Let A ∈ F^{m×n} and B ∈ F^{p×q}; then

    A ⊗ B = (A ⊗ I_p)(I_n ⊗ B) = (I_m ⊗ B)(A ⊗ I_q).                (2)

Proof. According to the definition of the Kronecker product and the multiplication of partitioned matrices, we have

    A ⊗ B = [a_ij B] = [a_ij I_p] diag[B, B, ..., B] = (A ⊗ I_p)(I_n ⊗ B),
    A ⊗ B = [a_ij B] = diag[B, B, ..., B] [a_ij I_q] = (I_m ⊗ B)(A ⊗ I_q),                (3)

where the first block-diagonal factor contains n copies of B and the second contains m copies of B.

From Theorem 1, we have the following corollary.

Corollary 2. Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

    A ⊗ B = (A ⊗ I_n)(I_m ⊗ B) = (I_m ⊗ B)(A ⊗ I_n).                (4)

This means that I_m ⊗ B and A ⊗ I_n are commutative for square matrices A and B.

Using Theorem 1, we can prove the following mixed products theorem.

Theorem 3. Let A = [a_ij] ∈ F^{m×n}, C = [c_ij] ∈ F^{n×p}, B ∈ F^{q×r}, and D ∈ F^{r×s}. Then

    (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).                (5)

Proof. According to Theorem 1, we have

    (A ⊗ B)(C ⊗ D) = (A ⊗ I_q)(I_n ⊗ B)(C ⊗ I_r)(I_p ⊗ D)
                   = (A ⊗ I_q)[(I_n ⊗ B)(C ⊗ I_r)](I_p ⊗ D)
                   = (A ⊗ I_q)(C ⊗ B)(I_p ⊗ D)
                   = (A ⊗ I_q)[(C ⊗ I_q)(I_p ⊗ B)](I_p ⊗ D)
                   = [(A ⊗ I_q)(C ⊗ I_q)][(I_p ⊗ B)(I_p ⊗ D)]
                   = [(AC) ⊗ I_q][I_p ⊗ (BD)]
                   = (AC) ⊗ (BD).                (6)

Let A^[1] := A and define the Kronecker power by

    A^[k+1] := A ⊗ A^[k] = A^[k] ⊗ A,   k = 1, 2, ....                (7)

From Theorem 3, we have the following corollary [7].

Corollary 4. If the following matrix products exist, then one has

(1) (A_1 ⊗ B_1)(A_2 ⊗ B_2) ⋯ (A_p ⊗ B_p) = (A_1 A_2 ⋯ A_p) ⊗ (B_1 B_2 ⋯ B_p),
(2) (A_1 ⊗ A_2 ⊗ ⋯ ⊗ A_p)(B_1 ⊗ B_2 ⊗ ⋯ ⊗ B_p) = (A_1 B_1) ⊗ (A_2 B_2) ⊗ ⋯ ⊗ (A_p B_p),
(3) [AB]^[k] = A^[k] B^[k].

A square matrix A is said to be a normal matrix if and only if A^H A = A A^H. A square matrix A is said to be a unitary matrix if and only if A^H A = A A^H = I. Straightforward calculation gives the following conclusions [6, 7, 33, 34].

Theorem 5. For any square matrices A and B,

(1) if A^{-1} and B^{-1} exist, then (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1},
(2) if A and B are normal matrices, then A ⊗ B is a normal matrix,
(3) if A and B are unitary (orthogonal) matrices, then A ⊗ B is a unitary (orthogonal) matrix.

Let λ[A] := {λ_1, λ_2, ..., λ_m} denote the eigenvalues of A and let σ[A] := {σ_1, σ_2, ..., σ_r} denote the nonzero singular values of A. According to the definition of the eigenvalue and Theorem 3, we have the following conclusions [34].

Theorem 6. Let A ∈ F^{m×m} and B ∈ F^{n×n}, and let k and l be positive integers. Then λ[A^k ⊗ B^l] = {λ_i^k μ_j^l | i = 1, 2, ..., m, j = 1, 2, ..., n} = λ[B^l ⊗ A^k], where λ[A] = {λ_1, λ_2, ..., λ_m} and λ[B] = {μ_1, μ_2, ..., μ_n}.
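As a small numerical check of Theorem 3 and Theorem 6 (not part of the original argument; the sizes, exponents, and random matrices below are arbitrary), both identities can be tested directly:

import numpy as np

rng = np.random.default_rng(1)

# Theorem 3: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) for conformable rectangular matrices
A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2)); D = rng.standard_normal((2, 4))
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Theorem 6: the eigenvalues of A^k ⊗ B^l are the products λ_i^k μ_j^l
A = rng.standard_normal((3, 3)); B = rng.standard_normal((2, 2))
k, l = 2, 3
Ak = np.linalg.matrix_power(A, k); Bl = np.linalg.matrix_power(B, l)
lam, mu = np.linalg.eigvals(A), np.linalg.eigvals(B)
spectrum = np.linalg.eigvals(np.kron(Ak, Bl))
products = np.outer(lam**k, mu**l).ravel()
# compare the two spectra as multisets, up to numerical error
assert all(np.abs(spectrum - z).min() < 1e-6 for z in products)
assert all(np.abs(np.linalg.eigvals(np.kron(Bl, Ak)) - z).min() < 1e-6 for z in products)

print("Theorem 3 and Theorem 6 verified numerically")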

According to the definition of the singular value and Theorem 3, for any matrices A and B we have the next theorem.

Theorem 7. Let A ∈ F^{m×n} and B ∈ F^{p×q}. If rank[A] = r, σ[A] = {σ_1, σ_2, ..., σ_r}, rank[B] = s, and σ[B] = {ρ_1, ρ_2, ..., ρ_s}, then σ[A ⊗ B] = {σ_i ρ_j | i = 1, 2, ..., r, j = 1, 2, ..., s} = σ[B ⊗ A].

Proof. According to the singular value decomposition theorem, there exist unitary matrices U, V and W, Q which satisfy

    A = U [Σ 0; 0 0] V,   B = W [Γ 0; 0 0] Q,                (8)

where Σ = diag[σ_1, σ_2, ..., σ_r], Γ = diag[ρ_1, ρ_2, ..., ρ_s], and [Σ 0; 0 0] denotes the block matrix with Σ in the upper-left corner and zeros elsewhere. According to Corollary 4, we have

    A ⊗ B = {U [Σ 0; 0 0] V} ⊗ {W [Γ 0; 0 0] Q}
          = (U ⊗ W) {[Σ 0; 0 0] ⊗ [Γ 0; 0 0]} (V ⊗ Q),                (9)

and the middle factor can be brought to the form [Σ ⊗ Γ 0; 0 0] by a permutation of its rows and columns, which changes neither the singular values nor the rank. Since U ⊗ W and V ⊗ Q are unitary matrices and Σ ⊗ Γ = diag[σ_1 ρ_1, σ_1 ρ_2, ..., σ_1 ρ_s, ..., σ_r ρ_s], this proves the theorem.

According to Theorem 7, we have the next corollary.

Corollary 8. For any matrices A, B, and C, one has σ[A ⊗ B ⊗ C] = σ[C ⊗ B ⊗ A].
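Theorem 7 is also easy to confirm numerically; in the sketch below (sizes and random matrices arbitrary), A and B are chosen rectangular so that A ⊗ B is rank deficient and zero singular values appear as well.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3))   # rank 2 for generic entries
B = rng.standard_normal((3, 2))   # rank 2 for generic entries

sa = np.linalg.svd(A, compute_uv=False)              # σ_1 ≥ σ_2 > 0
sb = np.linalg.svd(B, compute_uv=False)              # ρ_1 ≥ ρ_2 > 0
sk = np.linalg.svd(np.kron(A, B), compute_uv=False)  # six values, four nonzero

products = np.sort(np.outer(sa, sb).ravel())[::-1]   # all σ_i ρ_j, descending
assert np.allclose(sk[:products.size], products)     # the nonzero part matches
assert np.allclose(sk[products.size:], 0)            # the rest are (numerically) zero
assert np.allclose(sk, np.linalg.svd(np.kron(B, A), compute_uv=False))  # σ[A ⊗ B] = σ[B ⊗ A]

print("Theorem 7 verified numerically")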

4. The Properties of the Vector Operator and the Vec-Permutation Matrix

In this section, we introduce a vector-valued operator and a vec-permutation matrix.

Let A = [a_1, a_2, ..., a_n] ∈ F^{m×n}, where a_j ∈ F^m, j = 1, 2, ..., n; then the vector col[A] is defined by

    col[A] := [ a_1 ]
              [ a_2 ]
              [  ⋮  ]
              [ a_n ]   ∈ F^{mn}.                (10)

Theorem 9. Let A ∈ F^{m×n}, B ∈ F^{n×p}, and C ∈ F^{p×n}. Then

(1) (I_p ⊗ A) col[B] = col[AB],
(2) (A ⊗ I_p) col[C] = col[CA^T].

Proof. Let (B)_i denote the ith column of the matrix B. Since I_p ⊗ A = diag[A, A, ..., A], the ith block of (I_p ⊗ A) col[B] is A(B)_i = (AB)_i, and stacking these blocks for i = 1, 2, ..., p gives

    (I_p ⊗ A) col[B] = diag[A, A, ..., A] col[B] = col[AB].                (11)

Similarly, since A ⊗ I_p = [a_ij I_p], the ith block of (A ⊗ I_p) col[C] is

    a_i1 (C)_1 + a_i2 (C)_2 + ⋯ + a_in (C)_n = C (A^T)_i = (CA^T)_i,   i = 1, 2, ..., m,

so that

    (A ⊗ I_p) col[C] = col[CA^T].                (12)
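In NumPy, col[⋅] is column-major ("F"-order) flattening, so Theorem 9 can be checked in a few lines; the helper name col and the sizes below are ad hoc choices.

import numpy as np

def col(M):
    # stack the columns of M into one long vector, cf. (10)
    return M.flatten(order="F")

rng = np.random.default_rng(3)
m, n, p = 2, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))

# Theorem 9(1): (I_p ⊗ A) col[B] = col[AB]
assert np.allclose(np.kron(np.eye(p), A) @ col(B), col(A @ B))
# Theorem 9(2): (A ⊗ I_p) col[C] = col[C A^T]
assert np.allclose(np.kron(A, np.eye(p)) @ col(C), col(C @ A.T))

print("Theorem 9 verified numerically")

The same helper also checks Theorem 10 below in one line: for conformable A, B, C, the product np.kron(C.T, A) @ col(B) reproduces col(A @ B @ C).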

Theorem 10. Let A ∈ F^{m×n}, B ∈ F^{n×p}, and C ∈ F^{p×q}. Then

    col[ABC] = (C^T ⊗ A) col[B].                (13)

Proof. According to Theorems 9 and 1, we have

    col[ABC] = col[(AB)C]
             = (C^T ⊗ I_m) col[AB]
             = (C^T ⊗ I_m)(I_p ⊗ A) col[B]
             = [(C^T ⊗ I_m)(I_p ⊗ A)] col[B]
             = (C^T ⊗ A) col[B].                (14)

Theorem 10 plays an important role in solving matrix equations [25, 35–37], system identification [38–54], and control theory [55–58].

Let e_in denote the n-dimensional column vector which has 1 in the ith position and 0's elsewhere; that is,

    e_in := [0, 0, ..., 0, 1, 0, ..., 0]^T.                (15)

Define the vec-permutation matrix

    P_mn := [ I_m ⊗ e_1n^T ]
            [ I_m ⊗ e_2n^T ]
            [       ⋮      ]
            [ I_m ⊗ e_nn^T ]   ∈ R^{mn×mn},                (16)

which can also be expressed as [6, 7, 33, 37]

    ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T.                (17)

These two definitions of the vec-permutation matrix are equivalent; that is,

    ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T = P_mn.                (18)

In fact, according to Theorem 3 and the basic properties of the Kronecker product, we have

    ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T
        = ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn ⊗ e_jm)(e_jm^T ⊗ e_kn^T)
        = ∑_{j=1}^{m} ∑_{k=1}^{n} (e_kn e_jm^T) ⊗ (e_jm e_kn^T)
        = ∑_{j=1}^{m} ∑_{k=1}^{n} e_kn ⊗ (e_jm e_jm^T) ⊗ e_kn^T
        = ∑_{k=1}^{n} e_kn ⊗ [∑_{j=1}^{m} e_jm e_jm^T] ⊗ e_kn^T
        = ∑_{k=1}^{n} e_kn ⊗ I_m ⊗ e_kn^T
        = P_mn.                (19)
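As an illustration, both constructions (16) and (17) of the vec-permutation matrix can be generated explicitly and compared; the helper names and the sizes m, n below are ad hoc choices.

import numpy as np

def vec_perm_stacked(m, n):
    # P_mn as in (16): the vertical stack of the blocks I_m ⊗ e_in^T, i = 1, ..., n
    E = np.eye(n)
    return np.vstack([np.kron(np.eye(m), E[i][None, :]) for i in range(n)])

def vec_perm_sum(m, n):
    # P_mn as in (17): the sum of the rank-one matrices (e_kn ⊗ e_jm)(e_jm ⊗ e_kn)^T
    Em, En = np.eye(m), np.eye(n)
    P = np.zeros((m * n, m * n))
    for j in range(m):
        for k in range(n):
            P += np.outer(np.kron(En[k], Em[j]), np.kron(Em[j], En[k]))
    return P

m, n = 3, 4
P1, P2 = vec_perm_stacked(m, n), vec_perm_sum(m, n)
assert np.allclose(P1, P2)           # the equivalence (18)
assert P1.shape == (m * n, m * n)

print("the two definitions of P_mn agree")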

Based on the definition of the vec-permutation matrix, we have the following conclusions.

Theorem 11. According to the definition of P_mn, one has

(1) P_mn^T = P_nm,
(2) P_mn P_mn^T = P_mn^T P_mn = I_mn.

That is, P_mn is an (mn) × (mn) permutation matrix.

Proof. According to the definition of P_mn, Theorem 3, and the basic properties of the Kronecker product, we have

    P_mn P_mn^T = [ I_m ⊗ (e_in^T e_kn) ]_{i,k=1}^{n} = diag[I_m, I_m, ..., I_m] = I_mn,                (20)

since e_in^T e_kn equals 1 for i = k and 0 otherwise, and

    P_mn^T P_mn = ∑_{i=1}^{n} (I_m ⊗ e_in)(I_m ⊗ e_in^T) = I_m ⊗ [∑_{i=1}^{n} e_in e_in^T] = I_m ⊗ I_n = I_mn.                (21)

Moreover, transposing the representation (18) term by term gives

    P_mn^T = ∑_{j=1}^{m} ∑_{k=1}^{n} (e_jm ⊗ e_kn)(e_kn ⊗ e_jm)^T = P_nm,                (22)

which is exactly the representation (18) of P_nm with the roles of m and n interchanged.

For any matrix A ∈ F^{m×n}, we have col[A] = P_mn col[A^T].

Theorem 12. If A ∈ F^{m×n} and B ∈ F^{p×q}, then one has P_mp(A ⊗ B)P_nq^T = B ⊗ A.

Proof. Write B = [b_ij] and let B^i ∈ F^{1×q} denote the ith row of B, i = 1, 2, ..., p, j = 1, 2, ..., q. According to the definition of P_mn and the Kronecker product, A ⊗ B = [(A)_1 ⊗ B, (A)_2 ⊗ B, ..., (A)_n ⊗ B] and (I_m ⊗ e_ip^T)[(A)_j ⊗ B] = (A)_j ⊗ B^i, so that

    P_mp (A ⊗ B) = [ A ⊗ B^1 ]
                   [ A ⊗ B^2 ]
                   [    ⋮    ]
                   [ A ⊗ B^p ].

Since P_nq^T = [I_n ⊗ e_1q, I_n ⊗ e_2q, ..., I_n ⊗ e_qq] and (A ⊗ B^i)(I_n ⊗ e_jq) = A ⊗ (B^i e_jq) = b_ij A, we obtain

    P_mp (A ⊗ B) P_nq^T = [ b_11 A   b_12 A   ⋯   b_1q A ]
                          [ b_21 A   b_22 A   ⋯   b_2q A ]
                          [    ⋮        ⋮             ⋮   ]
                          [ b_p1 A   b_p2 A   ⋯   b_pq A ]  = B ⊗ A.                (23)

From Theorem 12, we have the following corollaries.

Corollary 13. If A ∈ F^{m×n}, then P_mr(A ⊗ I_r)P_nr^T = I_r ⊗ A.
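The following self-contained sketch (helper names and sizes are ad hoc) checks Theorem 11, the identity col[A] = P_mn col[A^T], Theorem 12, and Corollary 13 numerically.

import numpy as np

def vec_perm(m, n):
    # P_mn from (16): vertical stack of I_m ⊗ e_in^T, i = 1, ..., n
    E = np.eye(n)
    return np.vstack([np.kron(np.eye(m), E[i][None, :]) for i in range(n)])

def col(M):
    return M.flatten(order="F")   # column-stacking operator, cf. (10)

rng = np.random.default_rng(4)
m, n, p, q, r = 2, 3, 4, 2, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

P = vec_perm(m, n)
assert np.allclose(P.T, vec_perm(n, m))        # Theorem 11(1): P_mn^T = P_nm
assert np.allclose(P @ P.T, np.eye(m * n))     # Theorem 11(2): P_mn P_mn^T = I_mn
assert np.allclose(col(A), P @ col(A.T))       # col[A] = P_mn col[A^T]

# Theorem 12: P_mp (A ⊗ B) P_nq^T = B ⊗ A
assert np.allclose(vec_perm(m, p) @ np.kron(A, B) @ vec_perm(n, q).T, np.kron(B, A))
# Corollary 13: P_mr (A ⊗ I_r) P_nr^T = I_r ⊗ A
assert np.allclose(vec_perm(m, r) @ np.kron(A, np.eye(r)) @ vec_perm(n, r).T,
                   np.kron(np.eye(r), A))

print("Theorems 11 and 12 verified numerically")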

Corollary 14. If A ∈ F^{m×n} and B ∈ F^{n×m}, then

    B ⊗ A = P_mn (A ⊗ B) P_nm^T = P_mn [(A ⊗ B) P_mn^2] P_mn^T.                (24)

That is, λ[B ⊗ A] = λ[(A ⊗ B)P_mn^2]. When A ∈ F^{n×n} and B ∈ F^{t×t}, one has B ⊗ A = P_nt(A ⊗ B)P_nt^T; that is, if A and B are square matrices, then A ⊗ B is similar to B ⊗ A.

5. The Scalar Properties and the Polynomial Matrix of the Kronecker Product

In this section, we discuss the properties [6, 7, 34] of the determinant, the trace, the rank, and the polynomial matrix of the Kronecker product.

For A ∈ F^{m×m} and B ∈ F^{n×n}, we have |A ⊗ B| = |A|^n |B|^m = |B ⊗ A|. If A and B are two square matrices, then we have tr[A ⊗ B] = tr[A] tr[B] = tr[B ⊗ A]. For any matrices A and B, we have rank[A ⊗ B] = rank[A] rank[B] = rank[B ⊗ A].
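These three scalar identities are easy to confirm numerically; in the sketch below the sizes are arbitrary, and a rank-one factor is used so that the rank identity is not trivial.

import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))

# |A ⊗ B| = |A|^n |B|^m = |B ⊗ A|
dk = np.linalg.det(np.kron(A, B))
assert np.isclose(dk, np.linalg.det(A) ** n * np.linalg.det(B) ** m)
assert np.isclose(dk, np.linalg.det(np.kron(B, A)))

# tr[A ⊗ B] = tr[A] tr[B]
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))

# rank[A ⊗ B] = rank[A] rank[B]; here C has rank one
C = np.outer(rng.standard_normal(n), rng.standard_normal(n))
assert np.linalg.matrix_rank(np.kron(A, C)) == \
       np.linalg.matrix_rank(A) * np.linalg.matrix_rank(C)

print("determinant, trace, and rank identities verified")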

According to these scalar properties, we have the following theorems.

Theorem 15. (1) Let A, C ∈ F^{m×m} and B, D ∈ F^{n×n}. Then

    |(A ⊗ B)(C ⊗ D)| = |A ⊗ B| |C ⊗ D| = (|A||C|)^n (|B||D|)^m = |AC|^n |BD|^m.                (25)

(2) If A, B, C, and D are square matrices, then

    tr[(A ⊗ B)(C ⊗ D)] = tr[(AC) ⊗ (BD)] = tr[AC] tr[BD] = tr[CA] tr[DB].                (26)

(3) Let A ∈ F^{m×n}, C ∈ F^{n×p}, B ∈ F^{q×r}, and D ∈ F^{r×s}; then

    rank[(A ⊗ B)(C ⊗ D)] = rank[(AC) ⊗ (BD)] = rank[AC] rank[BD].                (27)

Theorem 16. If f(x, y) := x^r y^s is a monomial and f(A, B) := A^[r] ⊗ B^[s], where r and s are positive integers, one has the following conclusions.

(1) Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

    |f(A, B)| = |A|^{r m^{r-1} n^s} |B|^{s m^r n^{s-1}}.                (28)

(2) If A and B are square matrices, then

    tr[f(A, B)] = f(tr[A], tr[B]).                (29)

(3) For any matrices A and B, one has

    rank[f(A, B)] = f(rank[A], rank[B]).                (30)

If λ[A] = {λ_1, λ_2, ..., λ_m} and f(x) = ∑_{i=1}^{k} c_i x^i is a polynomial, then the eigenvalues of

    f(A) = ∑_{i=1}^{k} c_i A^i                (31)

are

    f(λ_j) = ∑_{i=1}^{k} c_i λ_j^i,   j = 1, 2, ..., m.                (32)

Similarly, consider a polynomial f(x, y) in two variables x and y:

    f(x, y) = ∑_{i,j=1}^{k} c_ij x^i y^j,   c_ij, x, y ∈ F,                (33)

where k is a positive integer. Define the polynomial matrix f(A, B) by the formula

    f(A, B) = ∑_{i,j=1}^{k} c_ij A^i ⊗ B^j.                (34)

According to Theorem 3, we have the following theorems [34].

Theorem 17. Let A ∈ F^{m×m} and B ∈ F^{n×n}. If λ[A] = {λ_1, λ_2, ..., λ_m} and λ[B] = {μ_1, μ_2, ..., μ_n}, then the matrix f(A, B) has the eigenvalues

    f(λ_r, μ_s) = ∑_{i,j=1}^{k} c_ij λ_r^i μ_s^j,   r = 1, 2, ..., m,   s = 1, 2, ..., n.                (35)
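As an illustration of (34) and Theorem 17 (the degree k, the coefficients c_ij, and the test matrices below are arbitrary; symmetric matrices are used only so that all spectra are real and easy to compare):

import numpy as np

rng = np.random.default_rng(6)
m, n, k = 3, 2, 2
c = rng.standard_normal((k, k))                 # coefficients c_ij, i, j = 1, ..., k
A = rng.standard_normal((m, m)); A = A + A.T    # symmetric, hence real eigenvalues
B = rng.standard_normal((n, n)); B = B + B.T

# the polynomial matrix f(A, B) = sum_{i,j} c_ij A^i ⊗ B^j, cf. (34)
fAB = sum(c[i - 1, j - 1] *
          np.kron(np.linalg.matrix_power(A, i), np.linalg.matrix_power(B, j))
          for i in range(1, k + 1) for j in range(1, k + 1))

lam = np.linalg.eigvalsh(A)                     # λ_1, ..., λ_m
mu = np.linalg.eigvalsh(B)                      # μ_1, ..., μ_n

# Theorem 17: the eigenvalues of f(A, B) are exactly the values f(λ_r, μ_s)
predicted = np.array([sum(c[i - 1, j - 1] * lr ** i * ms ** j
                          for i in range(1, k + 1) for j in range(1, k + 1))
                      for lr in lam for ms in mu])
assert np.allclose(np.sort(predicted), np.linalg.eigvalsh(fAB))

print("Theorem 17 verified numerically")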

Theorem 18 (see [34]). Let A ∈ F^{m×m}. If f(z) is an analytic function and f(A) exists, then

    f(I_n ⊗ A) = I_n ⊗ f(A),   f(A ⊗ I_n) = f(A) ⊗ I_n.

Finally, we introduce some results about the Kronecker sum [7, 34]. The Kronecker sum of A ∈ F^{m×m} and B ∈ F^{n×n}, denoted as A ⊕ B, is defined by

    A ⊕ B = A ⊗ I_n + I_m ⊗ B.

Theorem 19. Let A ∈ F^{m×m} and B ∈ F^{n×n}. Then

    exp[A ⊕ B] = exp[A] ⊗ exp[B],
    sin(A ⊕ B) = sin(A) ⊗ cos(B) + cos(A) ⊗ sin(B),
    cos(A ⊕ B) = cos(A) ⊗ cos(B) − sin(A) ⊗ sin(B).

6. Conclusions

This paper establishes some conclusions on the Kronecker products and the vec-permutation matrix. A new presentation of the properties of the mixed products and the vector operator is given. All these conclusions make the theory of the Kronecker product more complete.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61273194), the 111 Project (B12018), and the PAPD of Jiangsu Higher Education Institutions.

References

[1] H. V. Henderson, F. Pukelsheim, and S. R. Searle, "On the history of the Kronecker product," Linear and Multilinear Algebra, vol. 14, no. 2, pp. 113–120, 1983.
[2] X. L. Xiong, W. Fan, and R. Ding, "Least-squares parameter estimation algorithm for a class of input nonlinear systems," Journal of Applied Mathematics, vol. 2007, Article ID 684074, 14 pages, 2007.
[3] F. Ding, "Transformations between some special matrices," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2676–2695, 2010.
[4] Y. Shi and B. Yu, "Output feedback stabilization of networked control systems with random delays modeled by Markov chains," IEEE Transactions on Automatic Control, vol. 54, no. 7, pp. 1668–1674, 2009.
[5] Y. Shi, H. Fang, and M. Yan, "Kalman filter-based adaptive control for networked systems with unknown parameters and randomly missing outputs," International Journal of Robust and Nonlinear Control, vol. 19, no. 18, pp. 1976–1992, 2009.
[6] A. Graham, Kronecker Products and Matrix Calculus: With Applications, John Wiley & Sons, New York, NY, USA, 1982.
[7] W.-H. Steeb and Y. Hardy, Matrix Calculus and Kronecker Product: A Practical Approach to Linear and Multilinear Algebra, World Scientific, River Edge, NJ, USA, 2011.
[8] P. M. Bentler and S. Y. Lee, "Matrix derivatives with chain rule and rules for simple, Hadamard, and Kronecker products," Journal of Mathematical Psychology, vol. 17, no. 3, pp. 255–262, 1978.
[9] J. R. Magnus and H. Neudecker, "Matrix differential calculus with applications to simple, Hadamard, and Kronecker products," Journal of Mathematical Psychology, vol. 29, no. 4, pp. 474–492, 1985.
[10] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
[11] F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
[12] L. Jódar and H. Abou-Kandil, "Kronecker products and coupled matrix Riccati differential systems," Linear Algebra and its Applications, vol. 121, no. 2-3, pp. 39–51, 1989.
[13] D. Bahuguna, A. Ujlayan, and D. N. Pandey, "Advanced type coupled matrix Riccati differential equation systems with Kronecker product," Applied Mathematics and Computation, vol. 194, no. 1, pp. 46–53, 2007.
[14] M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations AYB = E, CYD = F over generalized centro-symmetric matrices," Computers & Mathematics with Applications, vol. 56, no. 12, pp. 3246–3260, 2008.
[15] M. Dehghan and M. Hajarian, "An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation," Applied Mathematics and Computation, vol. 202, no. 2, pp. 571–588, 2008.
[16] C. F. van Loan, "The ubiquitous Kronecker product," Journal of Computational and Applied Mathematics, vol. 123, no. 1-2, pp. 85–100, 2000.
[17] M. Huhtanen, "Real linear Kronecker product operations," Linear Algebra and its Applications, vol. 418, no. 1, pp. 347–361, 2006.
[18] S. Delvaux and M. van Barel, "Rank-deficient submatrices of Kronecker products of Fourier matrices," Linear Algebra and its Applications, vol. 426, no. 2-3, pp. 349–367, 2007.
[19] S. G. Deo, K. N. Murty, and J. Turner, "Qualitative properties of adjoint Kronecker product boundary value problems," Applied Mathematics and Computation, vol. 133, no. 2-3, pp. 287–295, 2002.
[20] W.-H. Steeb and F. Wilhelm, "Exponential functions of Kronecker products and trace calculation," Linear and Multilinear Algebra, vol. 9, no. 4, pp. 345–346, 1981.
[21] J. Chuai and Y. Tian, "Rank equalities and inequalities for Kronecker products of matrices with applications," Applied Mathematics and Computation, vol. 150, no. 1, pp. 129–137, 2004.
[22] R. H. Koning, H. Neudecker, and T. Wansbeek, "Block Kronecker products and the vecb operator," Linear Algebra and its Applications, vol. 149, pp. 165–184, 1991.
[23] F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle," Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
[24] L. Xie, Y. Liu, and H. Yang, "Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F," Applied Mathematics and Computation, vol. 217, no. 5, pp. 2191–2199, 2010.
[25] F. Ding and T. Chen, "Gradient based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
[26] J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form A_i X B_i = F_i," Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3500–3507, 2010.
[27] A.-G. Wu, L. Lv, and G.-R. Duan, "Iterative algorithms for solving a class of complex conjugate and transpose matrix equations," Applied Mathematics and Computation, vol. 217, no. 21, pp. 8343–8353, 2011.
[28] A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, "Iterative solutions to the extended Sylvester-conjugate matrix equations," Applied Mathematics and Computation, vol. 217, no. 1, pp. 130–142, 2010.
[29] F. Zhang, Y. Li, W. Guo, and J. Zhao, "Least squares solutions with special structure to the linear matrix equation AXB = C," Applied Mathematics and Computation, vol. 217, no. 24, pp. 10049–10057, 2011.
[30] M. Dehghan and M. Hajarian, "SSHI methods for solving general linear matrix equations," Engineering Computations, vol. 28, no. 8, pp. 1028–1043, 2011.
[31] E. Erkmen and M. A. Bradford, "Coupling of finite element and meshfree methods for locking-free analysis of shear-deformable beams and plates," Engineering Computations, vol. 28, no. 8, pp. 1003–1027, 2011.
[32] A. Kaveh and B. Alinejad, "Eigensolution of Laplacian matrices for graph partitioning and domain decomposition approximate algebraic method," Engineering Computations, vol. 26, no. 7, pp. 828–842, 2009.
[33] X. Z. Zhan, The Theory of Matrices, Higher Education Press, Beijing, China, 2008 (Chinese).

[34] P. Lancaster and M. Tismenetsky, The Theory of Matrices: with Applications, Academic Press, New York, NY, USA, 1985.
[35] M. Dehghan and M. Hajarian, "An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices," Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010.
[36] M. Dehghan and M. Hajarian, "An efficient algorithm for solving general coupled matrix equations and its application," Mathematical and Computer Modelling, vol. 51, no. 9-10, pp. 1118–1134, 2010.
[37] N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1996.
[38] F. Ding, "Decomposition based fast least squares algorithm for output error systems," Signal Processing, vol. 93, no. 5, pp. 1235–1242, 2013.
[39] F. Ding, "Coupled-least-squares identification for multivariable systems," IET Control Theory and Applications, vol. 7, no. 1, pp. 68–79, 2013.
[40] F. Ding, X. G. Liu, and J. Chu, "Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle," IET Control Theory and Applications, vol. 7, pp. 176–184, 2013.
[41] F. Ding, "Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling," Applied Mathematical Modelling, vol. 37, no. 4, pp. 1694–1704, 2013.
[42] F. Ding, "Two-stage least squares based iterative estimation algorithm for CARARMA system modeling," Applied Mathematical Modelling, vol. 37, no. 7, pp. 4798–4808, 2013.
[43] Y. J. Liu, Y. S. Xiao, and X. L. Zhao, "Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model," Applied Mathematics and Computation, vol. 215, no. 4, pp. 1477–1483, 2009.
[44] Y. J. Liu, J. Sheng, and R. F. Ding, "Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2615–2627, 2010.
[45] J. H. Li, "Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration," Applied Mathematics Letters, vol. 26, no. 1, pp. 91–96, 2013.
[46] J. H. Li, R. F. Ding, and Y. Yang, "Iterative parameter identification methods for nonlinear functions," Applied Mathematical Modelling, vol. 36, no. 6, pp. 2739–2750, 2012.
[47] J. Ding, F. Ding, X. P. Liu, and G. Liu, "Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data," IEEE Transactions on Automatic Control, vol. 56, no. 11, pp. 2677–2683, 2011.
[48] J. Ding and F. Ding, "Bias compensation-based parameter estimation for output error moving average systems," International Journal of Adaptive Control and Signal Processing, vol. 25, no. 12, pp. 1100–1111, 2011.
[49] J. Ding, L. L. Han, and X. M. Chen, "Time series AR modeling with missing observations based on the polynomial transformation," Mathematical and Computer Modelling, vol. 51, no. 5-6, pp. 527–536, 2010.
[50] F. Ding, Y. J. Liu, and B. Bao, "Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems," Proceedings of the Institution of Mechanical Engineers I, vol. 226, no. 1, pp. 43–55, 2012.
[51] F. Ding and Y. Gu, "Performance analysis of the auxiliary model-based least-squares identification algorithm for one-step state-delay systems," International Journal of Computer Mathematics, vol. 89, no. 15, pp. 2019–2028, 2012.
[52] F. Ding and Y. Gu, "Performance analysis of the auxiliary model-based stochastic gradient parameter estimation algorithm for state space systems with one-step state delay," Circuits, Systems and Signal Processing, vol. 32, no. 2, pp. 585–599, 2013.
[53] F. Ding and H. H. Duan, "Two-stage parameter estimation algorithms for Box-Jenkins systems," IET Signal Processing, 2013.
[54] P. P. Hu and F. Ding, "Multistage least squares based iterative estimation for feedback nonlinear systems with moving average noises using the hierarchical identification principle," Nonlinear Dynamics, 2013.
[55] H. G. Zhang and X. P. Xie, "Relaxed stability conditions for continuous-time TS fuzzy-control systems via augmented multi-indexed matrix approach," IEEE Transactions on Fuzzy Systems, vol. 19, no. 3, pp. 478–492, 2011.
[56] H. G. Zhang, D. W. Gong, B. Chen, and Z. W. Liu, "Synchronization for coupled neural networks with interval delay: a novel augmented Lyapunov-Krasovskii functional method," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 1, pp. 58–70, 2013.
[57] H. W. Yu and Y. F. Zheng, "Dynamic behavior of multi-agent systems with distributed sampled control," Acta Automatica Sinica, vol. 38, no. 3, pp. 357–363, 2012.
[58] Q. Z. Huang, "Consensus analysis of multi-agent discrete-time systems," Acta Automatica Sinica, vol. 38, no. 7, pp. 1127–1133, 2012.