
A New Approach to Multilinear Dynamical Systems and Control*

Randy C. Hoover1, Kyle Caudle2 and Karen Braman2

*The current research was supported in part by the Department of the Navy, Naval Engineering Education Consortium under Grant No. (N00174-19-1-0014), the NASA Space Grant Consortium and the National Science Foundation under Grant No. (2007367). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Naval Engineering Education Consortium, NASA or the National Science Foundation.

1 Randy C. Hoover is with the Department of Computer Science and Engineering, South Dakota Mines, Rapid City, SD, USA. [email protected]

2 Kyle Caudle and Karen Braman are with the Department of Mathematics, South Dakota Mines, Rapid City, SD, USA. {kyle.caudle,karen.braman}@sdsmt.edu

Abstract—The current paper presents a new approach to multilinear dynamical systems analysis and control. The approach is based upon recent developments in tensor decompositions and a newly defined algebra of circulants. In particular, it is shown that under the right tensor multiplication operator, a third order tensor can be written as a product of third order tensors in a way that is analogous to a traditional eigenvalue decomposition, where the "eigenvectors" become eigenmatrices and the "eigenvalues" become eigen-tuples. This new development allows for a proper tensor eigenvalue decomposition to be defined and has a natural extension to linear systems theory through a tensor-exponential. Through this framework we extend many of the traditional techniques used in linear systems theory to their multilinear counterparts.

I. INTRODUCTION

Traditional approaches to the analysis and control of linear time invariant (LTI) systems are well known and well understood. However, as systems become increasingly complex and multi-dimensional measurement devices become more commonplace, extensions from the linear system to a multilinear system framework need to be developed. While several approaches have been developed to investigate multilinear dynamical systems, most rely on decompositions revolving around either the Tucker or the Canonical Decomposition/Parallel Factors decomposition (commonly referred to collectively as the CP decomposition). Tucker/CP provides a framework for decomposing a high order tensor into a collection of factor matrices multiplying a "core tensor". The structure of the core tensor depends on which factorization strategy is being used (Tucker produces a "dense" core whereas CP produces a "diagonal" core). Regardless of the decomposition being applied, both are regarded as forms of higher order singular value decomposition [1]–[4].

As a form of high-order singular value decomposition, Tucker/CP algorithms have a natural fit within the machine learning community, where data naturally arises as two-dimensional structures, e.g., digital image data. As such, these algorithms have played a central role in extending many of the existing machine learning algorithms [1]–[12]. However, as their popularity has gained more traction over the last decade, they have made their way into the dynamical systems and controls community as well [13]–[25]. While most applications of Tucker/CP in the dynamical systems and controls community revolve around the reduction of certain classes of nonlinear systems to multilinear counterparts [15]–[21], [25], others have focused on time-series modeling [24], [26], fuzzy inference [23], or identification/modeling of inverse dynamics [22].

In the current paper, we describe a new approach to multilinear dynamical systems analysis and control through Fourier theory and an algebra of circulants as outlined in [27]–[31]. It is shown that under the right tensor multiplication operator, a third order tensor can be written as a product of third order tensors in which the left tensor is a collection of eigenmatrices, the middle tensor is a front-face diagonal (denoted as f-diagonal) tensor of eigen-tuples, and the right tensor is the tensor inverse of the eigenmatrices, resulting in a tensor-tensor eigenvalue decomposition that is similar to its matrix counterpart. Moreover, using the aforementioned decomposition, [32], [33] illustrate that a multilinear system of ordinary differential equations (MODEs) can be effectively solved via a tensor version of the matrix exponential (referred to as the t-exponential).

Building on the work of [27]–[33], the contributions of the current paper are fourfold: (1) we extend the results of [32], [33] to include the zero-state response of the multilinear dynamical system in an effort to introduce multilinear feedback control, (2) we develop a stability criterion for the multilinear dynamical system that includes exponential convergence of system trajectories, (3) we introduce a new approach to validate controllability of multilinear systems using a block-Krylov subspace condition, and finally (4) we present a method to design multilinear state-feedback control using the developments of (1)–(3).

The remainder of this paper is organized as follows: In Section II we discuss the relevant tensor algebra and the newly defined tensor multiplication operator. In Section III we present the tensor-tensor eigenvalue decomposition and show how it can be used to define functions on tensors (namely the tensor exponential). In Section IV we provide several extensions of traditional linear systems theory to their multilinear counterparts. In Section V we provide an illustrative example of the newly developed theory, and finally, Section VI presents some discussion and provides some insight into future research directions.
II. MATHEMATICAL FOUNDATIONS OF TENSORS

In the current section we discuss the mathematical foundations of the tensor decompositions used in the current work. While most of the theory in this section is outlined in [10], [27]–[29], [32], [33], we summarize this theory here to keep the current work self contained.

The term tensor, as used in the context of this paper, refers to a multi-dimensional array of numbers, sometimes called an n-way or n-mode array. If, for example, A ∈ R^{ℓ×m×n} then we say A is a third-order tensor, where order is the number of ways or modes of the tensor. Thus, matrices and vectors are second-order and first-order tensors, respectively. Fundamental to the results presented in this paper is a recently defined multiplication operation on third-order tensors which itself produces a third-order tensor [27], [28]. Further, it has been shown in [29] that under this multiplication operation, R^{ℓ×m×n} is a free module over a commutative ring with unity where the "scalars" are R^{1×1×n} tuples. In addition, it has been shown in [29] and [30] that all linear transformations on the space R^{ℓ×m×n} can be represented by multiplication by a third-order tensor. Thus, even though R^{ℓ×m×n} is not strictly a vector space, many of the familiar tools of matrix algebra can be applied in this new context, including the basic building blocks for dynamical systems and control of multilinear systems. For a more in depth discussion on this topic, the reader is referred to [29].

First, we review the basic definitions from [28] and [27] and introduce some basic notation. It will be convenient to break a tensor A in R^{ℓ×m×n} up into various slices and tubal elements, and to have an indexing on those. The ith lateral slice will be denoted A_i whereas the jth frontal slice will be denoted A^(j). In terms of MATLAB indexing notation, this means A_i ≡ A(:, i, :) while A^(j) ≡ A(:, :, j). We use the notation a_{ik} to denote the (i, k)th tube in A; that is, a_{ik} = A(i, k, :). The jth entry in that tube is a_{ik}^(j). Indeed, these tubes have special meaning for us in the present work, as they will play a role similar to scalars in R. Thus, we make the following definition:

Definition 1. An element c ∈ R^{1×1×n} is called a tubal-scalar of length n.

As mentioned previously, the set of tubal-scalars of length n endowed with element-wise addition and tensor multiplication (defined by the t-product in Def. 2) forms a commutative ring [29]. For ease of notation, we will use 0 to denote the additive identity, i.e., the tubal-scalar with all zero elements. Let e_1 denote the tubal-scalar with all zero elements except a 1 in the first position. Then it is easy to see that e_1 is the multiplicative identity in this ring, and it will play an important role in the remaining tensor definitions.

In order to discuss multiplication between two tensors we must first introduce the concept of converting A ∈ R^{ℓ×m×n} into a block circulant matrix. If A ∈ R^{ℓ×m×n} with ℓ × m frontal slices, then

\[
\mathrm{bcirc}(\mathcal{A}) =
\begin{bmatrix}
A^{(1)} & A^{(n)} & A^{(n-1)} & \cdots & A^{(2)} \\
A^{(2)} & A^{(1)} & A^{(n)}   & \cdots & A^{(3)} \\
\vdots  & \ddots  & \ddots    & \ddots & \vdots  \\
A^{(n)} & A^{(n-1)} & \cdots  & A^{(2)} & A^{(1)}
\end{bmatrix}
\]

is a block circulant matrix of size ℓn × mn.

We anchor the MatVec command to the frontal slices of the tensor: MatVec(A) takes an ℓ × m × n tensor and returns a block ℓn × m matrix,

\[
\mathrm{MatVec}(\mathcal{A}) =
\begin{bmatrix}
A^{(1)} \\ A^{(2)} \\ \vdots \\ A^{(n)}
\end{bmatrix}.
\]

The operation that takes MatVec(A) back to tensor form is the fold command:

\[
\mathrm{fold}(\mathrm{MatVec}(\mathcal{A})) = \mathcal{A}.
\]
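To make these unfolding operations concrete, the following sketch builds MatVec, fold, and bcirc for a third-order array with NumPy. Python and the helper names are our own choices (the paper prescribes no implementation), and frontal slices are assumed to be stored along the last axis, i.e., A^(k) = A[:, :, k-1].

    import numpy as np

    def matvec(A):
        # Stack the frontal slices A^(1), ..., A^(n) into an (l*n) x m block column.
        l, m, n = A.shape
        return np.concatenate([A[:, :, k] for k in range(n)], axis=0)

    def fold(M, n):
        # Inverse of matvec: split an (l*n) x m block column back into an l x m x n tensor.
        l = M.shape[0] // n
        return np.stack([M[k * l:(k + 1) * l, :] for k in range(n)], axis=2)

    def bcirc(A):
        # Block circulant matrix whose (i, j) block is A^(((i - j) mod n) + 1).
        l, m, n = A.shape
        return np.block([[A[:, :, (i - j) % n] for j in range(n)] for i in range(n)])

    A = np.random.randn(3, 2, 4)                 # l = 3, m = 2, n = 4
    assert np.allclose(fold(matvec(A), 4), A)    # fold undoes MatVec
    print(bcirc(A).shape)                        # (12, 8), i.e., (l n) x (m n)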
With these two operations in hand, we introduce the t-product between two third-order tensors [27], [28]:

Definition 2. Let A ∈ R^{ℓ×p×n} and B ∈ R^{p×m×n} be two third order tensors. Then the t-product A ∗ B ∈ R^{ℓ×m×n} is defined as

\[
\mathcal{A} * \mathcal{B} = \mathrm{fold}\bigl(\mathrm{bcirc}(\mathcal{A}) \cdot \mathrm{MatVec}(\mathcal{B})\bigr).
\]

Note that the tensor t-product enables the multiplication of two third order tensors via mod-n circular convolution. Moreover, in general the t-product of two tensors will not commute, with the exception of the case in which ℓ = p = m = 1, i.e., when the tensors are tubal-scalars. As a matter of illustration, Example 1 details the application of the t-product to two third order tensors.

Example 1: Suppose A ∈ R^{ℓ×p×3} and B ∈ R^{p×m×3}. Then

\[
\mathcal{A} * \mathcal{B} = \mathrm{fold}\left(
\begin{bmatrix}
A^{(1)} & A^{(3)} & A^{(2)} \\
A^{(2)} & A^{(1)} & A^{(3)} \\
A^{(3)} & A^{(2)} & A^{(1)}
\end{bmatrix}
\begin{bmatrix}
B^{(1)} \\ B^{(2)} \\ B^{(3)}
\end{bmatrix}
\right).
\]

Definition 3. The identity tensor I ∈ R^{m×m×n} is the tensor whose first frontal slice is the m × m identity matrix, and whose other frontal slices are all zeros.

Definition 4. If A is ℓ × m × n, then the tensor transpose A^T is the m × ℓ × n tensor obtained by transposing each of the frontal slices and then reversing the order of the transposed frontal slices 2 through n.

Definition 5. A tensor A ∈ R^{n×n×ℓ} has a tensor inverse B ∈ R^{n×n×ℓ} provided

\[
\mathcal{A} * \mathcal{B} = \mathcal{I} \quad \text{and} \quad \mathcal{B} * \mathcal{A} = \mathcal{I},
\]

where I ∈ R^{n×n×ℓ}. The tensor inverse is computed as A^{−1} = fold(bcirc(A)^{−1}), where the fold is applied to the first block column of bcirc(A)^{−1}.
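Continuing the sketch above, the t-product of Definition 2 and the identity and inverse of Definitions 3 and 5 can be checked numerically as follows; t_product, identity_tensor and t_inverse are hypothetical helper names that reuse matvec, fold and bcirc from the previous sketch.

    def t_product(A, B):
        # A is l x p x n, B is p x m x n; result is l x m x n (Definition 2).
        n = A.shape[2]
        return fold(bcirc(A) @ matvec(B), n)

    def identity_tensor(m, n):
        # First frontal slice is the m x m identity, all other slices are zero (Definition 3).
        I = np.zeros((m, m, n))
        I[:, :, 0] = np.eye(m)
        return I

    def t_inverse(A):
        # Fold the first block column of bcirc(A)^{-1} back into a tensor.
        n, _, l = A.shape
        return fold(np.linalg.inv(bcirc(A))[:, :n], l)

    A = np.random.randn(4, 4, 3)
    I = identity_tensor(4, 3)
    assert np.allclose(t_product(A, I), A)                        # A * I = A
    assert np.allclose(t_product(A, t_inverse(A)), I, atol=1e-6)  # A * A^{-1} = I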
III. TENSOR EIGENVALUE DECOMPOSITION AND FUNCTIONS OF TENSORS

In this section we present the tools required to extend traditional linear systems theory and control to the multilinear systems domain, namely, the computation of an eigenvalue-like decomposition for third order tensors. Such a decomposition provides a natural interpretation for defining functions of tensors [32], [33], canonical forms [34], and multilinear time-series analysis [14].

A. Computation of the t-eigenvalue decomposition

In an effort to extend traditional linear-time invariant systems analysis and control to their multilinear counterparts, we present the tensor-tensor eigenvalue decomposition. In [11], [12], [29] the authors show that, for A ∈ R^{n×n×ℓ}, there exists an n × n × ℓ tensor P and an n × n × ℓ f-diagonal (front-face diagonal) tensor D such that

\[
\mathcal{A} = \mathcal{P} * \mathcal{D} * \mathcal{P}^{-1}
\;\Longrightarrow\;
\mathcal{A} * \mathcal{P} = \mathcal{P} * \mathcal{D}
\;\Longrightarrow\;
\mathcal{A} * \mathcal{P}_j = \mathcal{P}_j * d_j.
\tag{1}
\]

Moreover, the overall "structure" of the decomposition is similar to a matrix eigenvalue decomposition in that we get a new tensor P whose lateral slices are analogous to eigenvectors (referred to as eigenmatrices) and an f-diagonal tensor D whose "tubal scalars" d_j = D(j, j, :) are analogous to eigenvalues (referred to as eigentuples). Throughout this paper, we refer to this decomposition as the t-eig, a graphical illustration of which is shown in Figure 1.

Computation of the t-eig comes from the constructive proof outlined in [11], [12], [29], which is restated here for completeness. It is well known in matrix theory that a circulant matrix can be diagonalized via left and right multiplication by a discrete Fourier transform (DFT) matrix. Similarly, a block circulant matrix can be block diagonalized via left and right multiplication by a block diagonal DFT matrix. For example, consider the tensor A ∈ R^{n×n×ℓ}; then

\[
(F_\ell \otimes I_n)\,\mathrm{bcirc}(\mathcal{A})\,(F_\ell^* \otimes I_n) =
\begin{bmatrix}
D_1 & & & \\
& D_2 & & \\
& & \ddots & \\
& & & D_\ell
\end{bmatrix},
\tag{2}
\]

where each of the D_i is n × n, I_n is the n × n identity matrix, and F_ℓ is the ℓ × ℓ DFT matrix,

\[
F_\ell = \frac{1}{\sqrt{\ell}}
\begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega & \omega^2 & \cdots & \omega^{\ell-1} \\
1 & \omega^2 & \omega^4 & \cdots & \omega^{2(\ell-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{\ell-1} & \omega^{2(\ell-1)} & \cdots & \omega^{(\ell-1)(\ell-1)}
\end{bmatrix},
\tag{3}
\]

where ω = e^{−2πi/ℓ} is a primitive ℓth root of unity, F_ℓ^* is its conjugate transpose, and ⊗ is the Kronecker product. To construct the t-eig defined in (1), the matrix eigenvalue decomposition is performed on each of the D_i, i.e., D_i = P_i Λ_i P_i^{−1}, resulting in the decomposition

\[
\begin{bmatrix}
D_1 & & \\
& \ddots & \\
& & D_\ell
\end{bmatrix}
=
\begin{bmatrix}
P_1 & & \\
& \ddots & \\
& & P_\ell
\end{bmatrix}
\begin{bmatrix}
\Lambda_1 & & \\
& \ddots & \\
& & \Lambda_\ell
\end{bmatrix}
\begin{bmatrix}
P_1^{-1} & & \\
& \ddots & \\
& & P_\ell^{-1}
\end{bmatrix}.
\tag{4}
\]

Applying (F_ℓ^* ⊗ I_n) to the left and (F_ℓ ⊗ I_n) to the right of each of the block diagonal matrices on the right hand side of (4) results in each being block circulant; i.e., if we define P̂ as the block diagonal matrix with the P_i as its diagonal blocks, then

\[
(F_\ell^* \otimes I_n)\,\hat{P}\,(F_\ell \otimes I_n) =
\begin{bmatrix}
P^{(1)} & P^{(\ell)} & \cdots & P^{(2)} \\
P^{(2)} & P^{(1)} & \cdots & P^{(3)} \\
\vdots & \vdots & \ddots & \vdots \\
P^{(\ell)} & P^{(\ell-1)} & \cdots & P^{(1)}
\end{bmatrix}.
\]

Taking the first block column of each block circulant matrix and applying the fold operator results in the decomposition P ∗ D ∗ P^{−1}. Note that for simplicity, as well as computational efficiency, this entire process can be performed using the fast Fourier transform in place of the DFT matrix, as illustrated in [11], [12], [27]–[29].
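Numerically, the construction above amounts to an FFT along the tubes, a matrix eigendecomposition of each Fourier-domain slice D_i, and an inverse FFT; the sketch below follows that recipe. The name t_eig is ours, the routine assumes each D_i is diagonalizable, and it reuses the helpers from the earlier sketches.

    def t_eig(A):
        # A is n x n x l. Returns tensors P and f-diagonal D with A = P * D * P^{-1}.
        n, _, l = A.shape
        A_hat = np.fft.fft(A, axis=2)                 # Fourier-domain slices D_1, ..., D_l
        P_hat = np.zeros((n, n, l), dtype=complex)
        D_hat = np.zeros((n, n, l), dtype=complex)
        for i in range(l):
            lam, Pi = np.linalg.eig(A_hat[:, :, i])   # D_i = P_i Lambda_i P_i^{-1}
            P_hat[:, :, i] = Pi
            D_hat[:, :, i] = np.diag(lam)
        # Inverse FFT maps the block-diagonal factors back to tensors P and D.
        return np.fft.ifft(P_hat, axis=2), np.fft.ifft(D_hat, axis=2)

    A = np.random.randn(3, 3, 4)
    P, D = t_eig(A)
    A_rec = t_product(t_product(P, D), t_inverse(P))   # should reproduce A
    assert np.allclose(A_rec.real, A, atol=1e-6)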

Fig. 1. Graphical illustration of the t-eig of an n × n × ℓ tensor.

B. Functions of tensors

The results of the preceding subsection illustrate that, similar to a traditional matrix eigenvalue decomposition, the t-eigenvalue decomposition in conjunction with the t-product enables us to decompose a third order tensor into the product of three third order tensors. In [32], [33], it is shown that traditional functions of matrices can be extended to functions of tensors using the decomposition defined above. Toward this end, let A ∈ C^{n×n×ℓ}, let f(·) : C → C be defined on the spectrum of bcirc(A), and suppose A has an eigendecomposition as defined by the t-eig; then the following hold¹:

1) f(A) commutes with A;
2) f(A^∗) = f(A)^∗;
3) f(P ∗ A ∗ P^{−1}) = P ∗ f(A) ∗ P^{−1}; and
4) f(A) ∗ P_i = P_i ∗ f(d_i) for all i = 1, …, n.

¹ A detailed proof can be found in [32] and is omitted here for brevity.

Using 4), it is easy to show that f(A) can be computed as

\[
f(\mathcal{A}) = \mathcal{P} *
\begin{bmatrix}
f(d_1) & & \\
& \ddots & \\
& & f(d_n)
\end{bmatrix}
* \mathcal{P}^{-1},
\tag{5}
\]

or alternatively, using 3) with eq. (2),

\[
(F_\ell \otimes I_n)\, f(\mathrm{bcirc}(\mathcal{A}))\, (F_\ell^* \otimes I_n) =
\begin{bmatrix}
f(D_1) & & & \\
& f(D_2) & & \\
& & \ddots & \\
& & & f(D_\ell)
\end{bmatrix},
\tag{6}
\]

where we note that each f(D_i) is defined by the traditional function of a matrix. Moreover, the product of f(A) with some tensor B ∈ C^{n×p×ℓ} is computed as

\[
f(\mathcal{A}) * \mathcal{B} = \mathrm{fold}\bigl(f(\mathrm{bcirc}(\mathcal{A})) \cdot \mathrm{MatVec}(\mathcal{B})\bigr),
\tag{7}
\]

which will become particularly useful when computing solutions to multilinear ordinary differential equations.

IV. MULTILINEAR SYSTEM THEORY

With the preceding definitions of the t-product, t-eig and functions of tensors in hand, we are in a position to develop a new approach to multilinear dynamical systems analysis and control. We proceed by briefly re-stating traditional linear system theory for completeness and presenting a subset of its multilinear extensions.

A. Linear systems theory

In the interest of completeness, we briefly outline a few well known results from traditional linear time-invariant (LTI) system theory. Consider the system of ordinary differential equations

\[
\dot{x}(t) = A x(t) + B u(t),
\tag{8}
\]

where x(t) ∈ R^n, A ∈ R^{n×n} and B ∈ R^{n×p}. Solutions to the system defined in (8) are given by

\[
x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau,
\tag{9}
\]

where e^{At} is the well known matrix exponential. Moreover, given the satisfaction of certain Krylov subspace conditions, namely that the controllability matrix C = [B, AB, A²B, ⋯, A^{n−1}B] has full row rank (i.e., rank(C) = n), the closed-loop eigenvalues of the system in (8) can be arbitrarily assigned via the control input u(t) = −Kx(t) with proper choice of K, subject to complex eigenvalues appearing as conjugate pairs. In the sense of stability for LTI systems, we require that the closed-loop eigenvalues λ_i of the matrix (A − BK) be contained in the left-half complex plane, i.e., {λ_i ∈ C | Re(λ_i) < 0}.

B. From linear to multilinear

In [32], [33] it is shown that the zero-input system (a.k.a. the homogeneous system) of multilinear ordinary differential equations (ODEs) given by

\[
\frac{d\mathcal{X}(t)}{dt} = \dot{\mathcal{X}}(t) = \mathcal{A} * \mathcal{X}(t)
\tag{10}
\]

has the solution

\[
\mathcal{X}(t) = e^{\mathcal{A}t} * \mathcal{X}(0),
\tag{11}
\]

where X ∈ R^{n×s×ℓ}, A ∈ R^{n×n×ℓ}, e^{At} is the tensor exponential computed as above with f(A) = e^{At}, and ∗ is the t-product. It should be noted that within this construct, in conjunction with the definitions outlined in Section II, the system outlined in (10) can be re-written via the fold and bcirc operators as [32]

\[
\frac{d}{dt}
\begin{bmatrix}
X^{(1)} \\ \vdots \\ X^{(\ell)}
\end{bmatrix}
= \mathrm{bcirc}(\mathcal{A})
\begin{bmatrix}
X^{(1)} \\ \vdots \\ X^{(\ell)}
\end{bmatrix},
\tag{12}
\]

with solutions

\[
\begin{bmatrix}
X^{(1)}(t) \\ \vdots \\ X^{(\ell)}(t)
\end{bmatrix}
= e^{\mathrm{bcirc}(\mathcal{A})t}
\begin{bmatrix}
X^{(1)}(0) \\ \vdots \\ X^{(\ell)}(0)
\end{bmatrix},
\tag{13}
\]

where we note that the computation of e^{bcirc(A)t} is performed via the standard matrix exponential.
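As a concrete illustration of the tensor exponential and the zero-input solution (11)/(13), the sketch below evaluates e^{At} slice-wise in the Fourier domain (cf. (6)) and cross-checks it against the block-circulant form; it reuses the helpers from the earlier sketches, and SciPy is assumed to be available for the matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    def t_expm(A, t):
        # Tensor exponential e^{A t}: matrix exponential of each Fourier-domain slice, cf. (6).
        n, _, l = A.shape
        A_hat = np.fft.fft(A, axis=2)
        E_hat = np.stack([expm(A_hat[:, :, i] * t) for i in range(l)], axis=2)
        return np.fft.ifft(E_hat, axis=2)

    n, s, l = 2, 1, 2
    A = np.random.randn(n, n, l)
    X0 = np.random.randn(n, s, l)
    t = 0.7
    X_t = t_product(t_expm(A, t), X0).real          # zero-input response (11)

    # Cross-check against the block-circulant expansion (13).
    assert np.allclose(matvec(X_t), expm(bcirc(A) * t) @ matvec(X0), atol=1e-6)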
Using the computation of e^{bcirc(A)t} obtained in (13), we can extend the results in [32] to include both the zero-input (homogeneous) and zero-state (forced) solutions. Indeed, given the multilinear system defined by

\[
\dot{\mathcal{X}}(t) = \mathcal{A} * \mathcal{X}(t) + \mathcal{B} * \mathcal{U}(t),
\tag{14}
\]

where X ∈ R^{n×s×ℓ}, A ∈ R^{n×n×ℓ}, B ∈ R^{n×q×ℓ}, and U ∈ R^{q×s×ℓ}, the solution to such a system is given by

\[
\mathcal{X}(t) = \underbrace{e^{\mathcal{A}t} * \mathcal{X}(0)}_{\text{zero-input}}
+ \underbrace{\int_0^t e^{\mathcal{A}(t-\tau)} * \mathcal{B} * \mathcal{U}(\tau)\, d\tau}_{\text{zero-state}}.
\tag{15}
\]

As illustrated in (13), the zero-input solution can be computed via block-circulant expansion of the tensor A. Following the same logic, the zero-state solution can be obtained via a similar expansion using the fold and bcirc operators. Namely,

\[
\int_0^t e^{\mathcal{A}(t-\tau)} * \mathcal{B} * \mathcal{U}(\tau)\, d\tau
= \mathrm{fold}\!\left( \int_0^t e^{\mathrm{bcirc}(\mathcal{A})(t-\tau)} \cdot \mathrm{bcirc}(\mathcal{B}) \cdot \mathrm{MatVec}(\mathcal{U}(\tau))\, d\tau \right),
\tag{16}
\]

where the · notation represents matrix/matrix or matrix/vector multiplication depending on the dimensions of B and U.

C. Stability of the MLTI system

Evaluating stability of the MLTI system is a bit more involved than evaluating stability of the LTI system. This stems from the fact that the eigenvalue decomposition of the MLTI system results in a set of eigentuples as opposed to eigenvalues. Moreover, it is difficult (if not impossible) to define what a negative real part of the eigentuple d_i means. Therefore, we approach stability through the construction of the tensor exponential itself as opposed to eigentuple evaluation. Toward this end, we have the following:

Claim: The trajectories X(t) of the MLTI system defined in (10) are exponentially stable (i.e., X(t) → 0 as t → ∞ and ||X(t)|| ≤ e^{−αt} for some positive α and all t > 0) if the eigenvalues of each D_i outlined in (2) have negative real parts.

Proof: Using (6), we can always re-write e^{At} in Fourier space as

\[
(F_\ell \otimes I_n)\, e^{\mathrm{bcirc}(\mathcal{A})t}\, (F_\ell^* \otimes I_n) =
\begin{bmatrix}
e^{D_1 t} & & \\
& \ddots & \\
& & e^{D_\ell t}
\end{bmatrix},
\tag{17}
\]

where each f(D_i) on the right hand side of (6) is computed as e^{D_i t}. Therefore the trajectories of (10) can also be written as

\[
\mathrm{MatVec}(\mathcal{X}(t)) = (F_\ell^* \otimes I_n)
\begin{bmatrix}
e^{D_1 t} & & \\
& \ddots & \\
& & e^{D_\ell t}
\end{bmatrix}
(F_\ell \otimes I_n)\, \mathrm{MatVec}(\mathcal{X}(0)).
\tag{18}
\]

If the eigenvalues of each D_i have negative real parts, then each f(D_i) = e^{D_i t} converges exponentially to the origin as t → ∞, which implies that e^{bcirc(A)t} does as well, and as a result so does X(t). Moreover, because the trajectories are governed by the matrix exponentials e^{D_i t}, we have ||X(t)|| ≤ e^{−αt} as t → ∞, where −α is the largest real part among the eigenvalues of the D_i, i = 1, 2, …, ℓ.

For completeness, we provide an alternative stability argument by analyzing the relationship between the eigentuples d_k and the eigenvalues of the individual D_i matrices outlined in (6). Let the eigenvalues associated with D_i be denoted by λ^i_j for i = 1, 2, …, ℓ and j = 1, 2, …, n. Then by (2) the first eigentuple is d_1 = F^{−1}{λ^1_1, λ^2_1, …, λ^ℓ_1}, where F^{−1}{·} denotes the inverse Fourier transform; i.e., d_1 is computed as the inverse Fourier transform of the sequence constructed from the first eigenvalue of each D_i (this always assumes the eigenvalues are sorted in descending order). Similarly, the second eigentuple is d_2 = F^{−1}{λ^1_2, λ^2_2, …, λ^ℓ_2}, and the kth eigentuple is computed as d_k = F^{−1}{λ^1_k, λ^2_k, …, λ^ℓ_k}. In other words, the Fourier transforms of the eigentuples d_k, for k = 1, 2, …, n, collect the eigenvalues of the D_i, for i = 1, 2, …, ℓ: F{d_1} produces the first (largest) eigenvalue λ^i_1 in each D_i, F{d_2} produces the second eigenvalue λ^i_2, and so on. That is, the complex plane maps to the eigentuples d_k through the inverse Fourier transform. Therefore, for stability, we require {F{d_k} ∈ C | Re(F{d_k}) < 0} for k = 1, 2, …, n.
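Either criterion above (the eigenvalues of the D_i, or equivalently the Fourier coefficients of the eigentuples) is straightforward to test numerically. A minimal sketch, assuming the helpers from the earlier snippets and the hypothetical name is_stable:

    def is_stable(A):
        # MLTI stability test: all eigenvalues of every slice D_i must lie in the open left-half plane.
        A_hat = np.fft.fft(A, axis=2)                     # slices D_1, ..., D_l of (2)
        max_re = max(np.linalg.eigvals(A_hat[:, :, i]).real.max()
                     for i in range(A.shape[2]))
        return max_re < 0, max_re                         # -max_re plays the role of alpha

    A = np.random.randn(3, 3, 4) - 4.0 * identity_tensor(3, 4)   # shift to encourage stability
    stable, max_re = is_stable(A)
    print(stable, max_re)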

D. Feedback control and eigentuple re-assignment

With the notion of stability for MLTI systems in hand, we now present a method to control the system through multilinear state feedback. Similar to traditional linear state feedback, prior to designing a state feedback control law we require certain controllability conditions. Namely, we need to impose a rank condition on the controllability tensor Ĉ constructed as

\[
\hat{\mathcal{C}} = \bigl[\, \mathcal{B},\; \mathcal{A} * \mathcal{B},\; \mathcal{A}^2 * \mathcal{B},\; \cdots,\; \mathcal{A}^{n-1} * \mathcal{B} \,\bigr],
\]

where A ∈ R^{n×n×ℓ}, B ∈ R^{n×q×ℓ} and ∗ is the t-product. The rank condition we seek was defined in [27]–[29] (referred to as the "tubal-rank") and stems from the fact that evaluating a zero eigentuple (or singular tuple) is fundamentally different than evaluating an eigenvalue (or singular value). Toward this end, the tubal rank of a tensor is defined as follows.

Tubal-rank [27]–[29]: Suppose a ∈ R^{1×1×n} is a tubal scalar. Then its tubal-rank is the number of its non-zero Fourier coefficients. If its tubal-rank is n, we say it is invertible; if it is less than n, it is not. In particular, the tubal-rank is 0 iff a = 0.

To ensure controllability of the MLTI system, we need all singular tuples of Ĉ to be non-zero, where the singular tuples of a tensor are computed similarly to the eigentuples [27]–[29]. Alternatively, we can check controllability by evaluating the rank of the block controllability matrix

\[
\mathrm{bcirc}(\hat{\mathcal{C}}) = \bigl[\, B_c,\; A_c \cdot B_c,\; A_c^2 \cdot B_c,\; \cdots,\; A_c^{n-1} \cdot B_c \,\bigr]
\tag{19}
\]

(up to a permutation of its block columns), where we define B_c = bcirc(B) and A_c = bcirc(A) for notational convenience. For complete controllability, we require rank(bcirc(Ĉ)) = ℓn.
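This rank test can be carried out directly with the helpers introduced earlier; block_ctrb below is a hypothetical name, and a random system is used purely for illustration.

    def block_ctrb(A, B):
        # Assemble [B_c, A_c B_c, A_c^2 B_c, ..., A_c^{n-1} B_c] with A_c = bcirc(A), B_c = bcirc(B).
        n = A.shape[0]
        Ac, M, blocks = bcirc(A), bcirc(B), []
        for _ in range(n):
            blocks.append(M)
            M = Ac @ M
        return np.concatenate(blocks, axis=1)

    A = np.random.randn(2, 2, 2)
    B = np.random.randn(2, 1, 2)
    full_rank = np.linalg.matrix_rank(block_ctrb(A, B)) == A.shape[0] * A.shape[2]
    print(full_rank)                                  # True => complete controllability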
MLTI State Feedback (theory): Given the MLTI system defined in (14), and assuming the controllability condition defined above is satisfied, then using the control input U(t) = −K ∗ X(t), where K ∈ R^{q×n×ℓ}, we can place the closed-loop eigentuples d̂_i arbitrarily as long as the complex elements of d̂_i are assigned in conjugate pairs. As a result, the trajectories of the new system will satisfy the multilinear ODEs given by

\[
\dot{\mathcal{X}}(t) = (\mathcal{A} - \mathcal{B} * \mathcal{K}) * \mathcal{X}(t),
\tag{20}
\]

where the eigentuples of the feedback tensor (A − B ∗ K) are arbitrarily assigned.

MLTI State Feedback (construction): While the above theory is technically sound, in practice choosing the feedback tensor K such that the closed-loop tensor (A − B ∗ K) has a desired set of eigentuples is challenging. This partially stems from the fact that characteristic polynomials for tensors and/or companion forms are still ongoing research efforts by the authors. Therefore, traditional approaches borrowed from linear systems theory (casting to control canonical form through a similarity transformation or developing a desired characteristic polynomial) fall short. As a result, we turn once again to (2) and note that the D_i on the right hand side of (2) contain exactly the spectrum of bcirc(A) and, by definition (although through a Fourier transform mapping), determine the eigentuples d_i of A. As a result, rather than attempting to re-assign the eigentuples directly, in practice it is more convenient to assign the eigenvalues of each D_i through traditional linear systems theory, i.e., construct the new matrix (D_i − B_i · K_i) for i = 1, 2, …, ℓ, and map those eigenvalues to the desired closed-loop eigentuples d̂_i through the Fourier transform².

² Here we note that B_i ∈ R^{n×q} is the first block of MatVec(B).
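One way to realize this per-slice construction numerically is sketched below: each Fourier-domain pair (D_i, B_i) receives its own gain K_i via Ackermann's formula (a standard single-input pole-placement construction, substituted here purely for illustration), and the spatial gain tensor K is recovered by an inverse FFT along the tubes. Two assumptions here are ours rather than the paper's: the input slices used are the FFTs of the frontal slices of B (rather than the B_i of the footnote), and the desired eigenvalue sets must respect the conjugate symmetry needed for K to come out real. The helper names acker and design_feedback are hypothetical.

    import numpy as np
    from numpy.polynomial import polynomial as Poly

    def acker(A, b, poles):
        # Ackermann's formula for single-input pole placement (complex matrices allowed).
        n = A.shape[0]
        C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
        coeffs = Poly.polyfromroots(poles)            # desired characteristic polynomial (monic)
        pA = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(coeffs))
        e_last = np.zeros((1, n))
        e_last[0, -1] = 1.0
        return e_last @ np.linalg.inv(C) @ pA         # 1 x n gain

    def design_feedback(A, B, desired):
        # desired[i] lists the closed-loop eigenvalues assigned to the Fourier-domain slice D_i.
        l = A.shape[2]
        A_hat, B_hat = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
        K_hat = np.stack([acker(A_hat[:, :, i], B_hat[:, :, i], desired[i])
                          for i in range(l)], axis=2)
        return np.fft.ifft(K_hat, axis=2)             # q x n x l gain tensor K

    # Usage on a random 2 x 2 x 2 single-input system.
    A, B = np.random.randn(2, 2, 2), np.random.randn(2, 1, 2)
    desired = [[-2 + 5j, -2 - 5j], [-10 + 10j, -10 - 10j]]
    K = design_feedback(A, B, desired)
    Acl_hat = np.fft.fft(A - t_product(B, K), axis=2)
    for i in range(2):
        print(np.linalg.eigvals(Acl_hat[:, :, i]))    # matches desired[i] up to roundoff

By construction, the eigenvalues of each closed-loop slice are the entries of desired[i]; through the inverse Fourier mapping described above, these determine the closed-loop eigentuples.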

V. ILLUSTRATIVE EXAMPLE

To help solidify the theory developed in Section IV, we present an illustrative example here. Consider the system of (14) given as

\[
\dot{\mathcal{X}}(t) = \mathcal{A} * \mathcal{X}(t) + \mathcal{B} * \mathcal{U}(t),
\tag{21}
\]

with A ∈ R^{2×2×2} whose frontal slices are given by

\[
A^{(1)} = \begin{bmatrix} -6 & 6 \\ -10 & 0 \end{bmatrix}
\quad \text{and} \quad
A^{(2)} = \begin{bmatrix} 0 & 2 \\ 8 & 2 \end{bmatrix},
\tag{22}
\]

and B ∈ R^{2×1×2} whose frontal slices are given by

\[
B^{(1)} = B^{(2)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\tag{23}
\]

and our state-matrix X(t) ∈ R^{2×1×2} (analogous to a state vector) is given as

\[
X^{(1)}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
\quad \text{and} \quad
X^{(2)}(t) = \begin{bmatrix} x_3(t) \\ x_4(t) \end{bmatrix}.
\tag{24}
\]

Performing the Fourier transform on the eigentuples returned by t-eig(A) yields open-loop eigentuples (governing the system trajectories, i.e., the eigenvalues of the D_i matrices outlined in (2)) at

\[
\bar{d}_1 = \begin{bmatrix} -3.414 \\ -0.586 \end{bmatrix}
\quad \text{and} \quad
\bar{d}_2 = \begin{bmatrix} -4 + j7.07 \\ -4 - j7.07 \end{bmatrix},
\]

where we have used d̄_i to denote that d_i ∈ R^{1×1×2} is a tubal-scalar. Although the open-loop system is stable, we aim to perform eigenvalue re-assignment to improve the closed-loop characteristics. Toward this end, using (2) we compute the matrices D_i to be

\[
D_1 = \begin{bmatrix} -6 & 7 \\ -2 & 2 \end{bmatrix}
\quad \text{and} \quad
D_2 = \begin{bmatrix} -6 & 3 \\ -18 & -2 \end{bmatrix},
\]

and desire our closed-loop eigenvalues of each D_i to be placed at λ^1_{1,2} = −2 ± j5 and λ^2_{1,2} = −10 ± j10, resulting in K_1 = [27  −27] and K_2 = [16.35  −4.35]. Finally, "stacking" each K_i into a tensor K̄ as K̄^(1) = K_1 and K̄^(2) = K_2 and performing the Fourier transform on K̄ yields

\[
K^{(1)} = \begin{bmatrix} 43.35 & -31.35 \end{bmatrix}
\quad \text{and} \quad
K^{(2)} = \begin{bmatrix} 10.64 & -22.64 \end{bmatrix},
\]

and letting U(t) = −K ∗ X(t) produces the desired closed-loop response. The trajectories for both the open-loop system and the closed-loop system are illustrated in Figures 2 and 3, respectively.

Fig. 2. Illustration of the open-loop trajectories of the system defined in (21) with no control applied, i.e., U(t) = 0.

Fig. 3. Illustration of the closed-loop trajectories of the system defined in (21) with the control input U(t) = −K ∗ X(t).
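For readers who wish to generate trajectories of the kind shown in Figures 2 and 3, the sketch below integrates the example system using the t-exponential helpers from the earlier snippets. The initial condition X(0) is an arbitrary choice of ours (the text does not specify one), and the commented closed-loop line simply substitutes the feedback tensor of (20) for A.

    import numpy as np

    # Frontal slices of A and B from (22)-(23).
    A = np.stack([np.array([[-6., 6.], [-10., 0.]]),
                  np.array([[ 0., 2.], [  8., 2.]])], axis=2)
    B = np.stack([np.array([[1.], [1.]]),
                  np.array([[1.], [1.]])], axis=2)
    X0 = np.ones((2, 1, 2))                    # assumed initial state, not given in the text

    ts = np.linspace(0.0, 5.0, 200)
    open_loop = np.array([t_product(t_expm(A, t), X0).real.ravel() for t in ts])
    print(open_loop.shape)                     # (200, 4): the four state components over time

    # Closed loop: with a gain tensor K in hand, replace A by the feedback tensor A - B * K:
    # closed_loop = np.array([t_product(t_expm(A - t_product(B, K).real, t), X0).real.ravel() for t in ts])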
VI. CONCLUSIONS AND FUTURE DIRECTIONS

This paper presented a new approach to the analysis and design of multilinear systems theory and control. The approach is based on a recently developed tensor product and tensor eigenvalue decomposition that lays the foundation for solutions to multilinear dynamical systems through the definition of a tensor-exponential. Using this formulation, we extend traditional linear systems theory and control to their multilinear counterparts. Namely, we introduce new notions of stability, controllability, and state-feedback for multilinear dynamical systems.

We note that the use of the above definitions for decomposing third order tensors is still very immature. While there are many different research directions to investigate within this framework, our immediate focus will be on developing a notion of a characteristic polynomial for tensors, which leads to defining what it means for a tensor to be in companion form. Moreover, we wish to investigate the observability conditions generally present in linear systems and define a framework for multilinear observability as well as multilinear state estimation. Finally, we wish to explore application areas for such a framework that may arise in real-world problems.

REFERENCES

[1] L. R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, no. 3, pp. 279–311, Sept. 1966.
[2] R. A. Harshman, "Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis," University of California Los Angeles, Tech. Rep. 10,085, December 1970.
[3] L. D. Lathauwer, B. D. Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM J. Matrix Anal. Appl., vol. 21, no. 4, pp. 1253–1278, March 2000.
[4] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, Aug. 2009.
[5] M. A. O. Vasilescu and D. Terzopoulos, "Multilinear analysis of image ensembles: TensorFaces," in European Conf. on Comp. Vis., Copenhagen, Denmark, May 2002, pp. 447–460.
[6] O. Vasilescu and D. Terzopoulos, "Multilinear projection for appearance-based recognition in the tensor framework," in Int. Conf. on Comp. Vis., 2007, pp. 1–8.
[7] O. Vasilescu and D. Terzopoulos, "Multilinear subspace analysis of image ensembles," in Int. Conf. on Comp. Vis. and Patt. Rec., 2003, pp. 93–99.
[8] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, July 2011.
[9] R. C. Hoover, A. A. Maciejewski, and R. G. Roberts, "Fast eigenspace decomposition of images of objects with variation in illumination and pose," IEEE Trans. Sys. Man, Cyber. B: Cybernetics, vol. PP, no. 99, pp. 1–12, Aug. 2010.
[10] R. C. Hoover, K. S. Braman, and N. Hao, "Pose estimation from a single image using tensor decomposition and an algebra of circulants," in Int. Conf. on Intel. Robots and Sys., 2011.
[11] N. Hao, M. E. Kilmer, K. S. Braman, and R. C. Hoover, "New tensor decompositions with applications in facial recognition," SIAM Journal on Imaging Sciences (SIIMS), vol. 6, no. 1, pp. 437–463, Feb. 2013.
[12] M. E. Kilmer, K. S. Braman, N. Hao, and R. C. Hoover, "Third order tensors as operators on matrices: A theoretical and computational framework with applications in imaging," SIAM Journal on Matrix Analysis and Applications (SIMAX), vol. 34, no. 1, pp. 148–172, Feb. 2013.
[13] J. Sun, D. Tao, and C. Faloutsos, "Beyond streams and graphs: Dynamic tensor analysis," in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '06, 2006, pp. 374–383.
[14] W. Lu, X. Liu, Q. Wu, Y. Sun, and A. Walid, "Transform-based multilinear dynamical system for tensor time series analysis," arXiv:1811.07342, 2018.
[15] K. Kruppa and G. Lichtenberg, "Feedback linearization of multilinear time-invariant systems using tensor decomposition methods," in SIMULTECH, 2018.
[16] K. Kruppa, G. Pangalos, and G. Lichtenberg, "Multilinear approximation of nonlinear state space models," IFAC Proceedings Volumes, vol. 47, no. 3, pp. 9474–9479, 2014.
[17] T. Müller, K. Kruppa, G. Lichtenberg, and N. Réhault, "Fault detection with qualitative models reduced by tensor decomposition methods," IFAC-PapersOnLine, vol. 48, no. 21, pp. 416–421, 2015.
[18] G. Pangalos, A. Eichler, and G. Lichtenberg, "Hybrid multilinear modeling and applications," in Simulation and Modeling Methodologies, Technologies and Applications - International Conference, SIMULTECH 2013, Reykjavík, Iceland, July 29-31, 2013, Revised Selected Papers, 2013, pp. 71–85.
[19] E. Sewe, G. Pangalos, and G. Lichtenberg, "Approaches to fault detection for heating systems using CP tensor decompositions," in Simulation and Modeling Methodologies, Technologies and Applications - 7th International Conference, SIMULTECH 2017, 2017, pp. 128–152.
[20] G. Pangalos, A. Eichler, and G. Lichtenberg, "Tensor systems - multilinear modeling and applications," in SIMULTECH 2013 - Proceedings of the 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications, Reykjavík, Iceland, 29-31 July, 2013, 2013, pp. 275–285.
[21] S. Pfeiffer, G. Lichtenberg, C. Schmidt, and H. Schlarb, "Tensor techniques for iterative learning control of a free-electron laser," in Proceedings of the IEEE International Conference on Control Applications, CCA 2012, Dubrovnik, Croatia, October 3-5, 2012, 2012, pp. 160–165.
[22] S. Baier and V. Tresp, "Tensor decompositions for modeling inverse dynamics," IFAC-PapersOnLine, vol. 50, no. 1, pp. 5630–5635, 2017.
[23] F. di Sciascio and R. Carelli, "Fuzzy modelling and identification of multilinear dynamical systems," in Proceedings of IEEE 5th International Fuzzy Systems, vol. 2, Sep. 1996, pp. 848–854.
[24] M. Rogers, L. Li, and S. J. Russell, "Multilinear dynamical systems for tensor time series," in Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, Eds., 2013, pp. 2634–2642.
[25] P. Gelß, S. Klus, J. Eisert, and C. Schütte, "Multidimensional approximation of nonlinear dynamical systems," 2018.
[26] W. Lu, X.-Y. Liu, Q. Wu, Y. Sun, and A. Elwalid, "Transform-based multilinear dynamical system for tensor time series analysis," arXiv, vol. abs/1811.07342, 2018.
[27] M. E. Kilmer, C. D. Martin, and L. Perrone, "A third-order generalization of the matrix SVD as a product of third-order tensors," Tufts University, Department of Computer Science, Tech. Rep. TR-2008-4, October 2008.
[28] M. E. Kilmer and C. D. Moravitz Martin, "Factorization strategies for third-order tensors," Linear Algebra and Its Applications, no. Special Issue in Honor of G. W. Stewart's 75th birthday, 2009.
[29] K. Braman, "Third-order tensors as linear operators on a space of matrices," Linear Algebra and its Applications, vol. 433, no. 7, pp. 1241–1253, 2010.
[30] M. Kilmer, K. Braman, and N. Hao, "Third order tensors as operators on matrices: A theoretical and computational framework," Tufts University, Department of Computer Science, Tech. Rep. TR-2011-01, January 2011.
[31] D. F. Gleich, C. Greif, and J. M. Varah, "The power and Arnoldi methods in an algebra of circulants," arXiv, vol. 1101.2173v1, 2011.
[32] K. Lund, "The tensor t-function: A definition for functions of third-order tensors," arXiv:1806.07261v1, 2018.
[33] K. Lund, "A new block Krylov subspace framework with applications to functions of matrices acting on multiple vectors," Ph.D. dissertation, Temple University, 2018.
[34] Y. Miao, L. Qi, and Y. Wei, "T-Jordan canonical form and T-Drazin inverse based on the t-product," arXiv:1902.07024, 2019.