arXiv:1211.3796v1 [math.NA] 16 Nov 2012

CANDECOMP/PARAFAC Decomposition of High-order Tensors Through Tensor Reshaping

Anh Huy Phan∗, Petr Tichavský and Andrzej Cichocki

Abstract

In general, algorithms for order-3 CANDECOMP/PARAFAC (CP), also coined canonical polyadic decomposition (CPD), are easily implemented and can be extended to higher order CPD. Unfortunately, the algorithms become computationally demanding, and they are often not applicable to higher order and relatively large scale tensors. In this paper, by exploiting the uniqueness of CPD and the relation of a tensor in Kruskal form and its unfolded tensor, we propose a fast approach to deal with this problem. Instead of directly factorizing the high order data tensor, the method decomposes an unfolded tensor in lower order, e.g., an order-3 tensor. On the basis of the estimated order-3 tensor, a structured Kruskal tensor of the same dimension as the data tensor is then generated, and decomposed to find the final solution using fast algorithms for the structured CPD. In addition, strategies to unfold tensors are suggested and practically verified in the paper.

Index Terms: Tensor factorization, canonical decomposition, PARAFAC, ALS, structured CPD, tensor unfolding, Cramér-Rao induced bound (CRIB), Cramér-Rao lower bound (CRLB).

∗ A. H. Phan and A. Cichocki are with the Lab for Advanced Brain Signal Processing, Brain Science Institute, RIKEN, Wakoshi, Japan, e-mail: (phan,cia)@brain.riken.jp. A. Cichocki is also with the System Research Institute, Warsaw, Poland. P. Tichavský is with the Institute of Information Theory and Automation, Prague, Czech Republic. The work of P. Tichavský was supported by the Grant Agency of the Czech Republic through the project 102/09/1278.

I. Introduction

CANDECOMP/PARAFAC [1], [2], also known as canonical polyadic decomposition (CPD), is a common tensor factorization which has found applications such as in chemometrics [3]–[5], telecommunication [6], [7], analysis of fMRI data [8], time-varying EEG spectrum [9], [10], data mining [11],

[12], separated representations for generic functions involved in quantum mechanics or kinetic theory descriptions of materials [13], classification, clustering [14], and compression [15]–[17]. Although the original decomposition and applications were developed for three-way data, the model was later widely extended to higher order tensors. For example, P. G. Constantine et al. [18] modeled pressure measurements along the combustion chamber as order-6 tensors corresponding to the flight conditions (Mach number, altitude and angle of attack) and the wall temperatures in the combustor and the turbulence mode. Hackbusch, Khoromskij, and Tyrtyshnikov [19] and Hackbusch and Khoromskij [20] investigated CP approximation to operators and functions in high dimensions. Oseledets and Tyrtyshnikov [21] approximated the Laplace operator and the general second-order operator which appears in the Black-Scholes equation for multi-asset modeling to tackle dimensions up to N = 200. In neuroscience, M. Mørup et al. [9] analyzed order-4 data constructed from EEG signals in the time-frequency domain. Order-5 tensors consisting of dictionaries × timeframes × frequency bins × channels × trials-subjects [22] built up from EEG signals were shown to give high performance in BCI based on EEG motor imagery. In object recognition (digits, faces, natural images), CPD was used to extract features from order-5 Gabor tensors including height × width × orientation × scale × images [22].

In general, many CP algorithms for order-3 tensors can be straightforwardly extended to decompose higher order tensors. For example, there are numerous algorithms for CPD including the alternating least squares (ALS) algorithm [1], [2] with line search extrapolation methods [1], [5], [23]–[25], rotation [26] and compression [27], or all-at-once algorithms such as the OPT algorithm [28], the conjugate gradient algorithm for nonnegative CP, the PMF3 and damped Gauss-Newton (dGN) algorithms [5], [29] and fast dGN [30]–[32], or algorithms based on the joint diagonalization problem [33]–[35]. The fact is that the algorithms become more complicated, computationally demanding, and often not applicable to relatively large scale tensors. For example, the complexity of the gradients of the cost function with respect to the factors grows linearly with the number of dimensions $N$: it has a computational cost of order $O\!\left(NR \prod_{n=1}^{N} I_n\right)$ for a tensor of size $I_1 \times I_2 \times \cdots \times I_N$. More tensor unfoldings $\mathbf{Y}_{(n)}$ $(n = 2, 3, \ldots, N-1)$ mean more time consumed in accessing non-contiguous blocks of data entries and shuffling their order in a computer. In addition, line search extrapolation methods [1], [4], [5], [23], [24], [36] become more complicated, and demand a high computational cost to build up and solve $(2N-1)$-th order polynomials. The rotation method [26] needs to estimate N rotation matrices of size R × R with a whole complexity per iteration of order $O(N^3 R^6)$.

Recently, a Cramér-Rao induced bound (CRIB) on the attainable squared angular error of factors in the CP decomposition has been proposed in [37]. The bound is valid under the assumption that the

decomposed tensor is corrupted by additive Gaussian noise which is independently added to each tensor element. In this paper we use the results of [37] to design a tensor unfolding strategy which ensures as little deterioration of accuracy as possible. This strategy is then verified in the simulations. By exploiting the uniqueness of CPD under mild conditions and the relation of a tensor in the Kruskal form [38] and its unfolded tensor, we propose a fast approach for high order and relatively large-scale CPD. Instead of directly factorizing the high order data tensor, the approach decomposes an unfolded tensor of lower order, e.g., an order-3 tensor. A structured Kruskal tensor with the same dimensions as the data tensor is then generated, and decomposed to find the desired factor matrices. We also propose a fast ALS algorithm to factorize the structured Kruskal tensor.

The paper is organized as follows. Notation and the CANDECOMP/PARAFAC decomposition are briefly reviewed in Section II. The simplified version of the proposed algorithm is presented in Section III. Loss of accuracy is investigated in Section III-A, and an efficient strategy for tensor unfolding is summarized in Section III-B. For difficult decomposition scenarios, we propose a new algorithm in Section IV. Simulations are performed on random tensors and a real-world dataset in Section V. Section VI concludes the paper.

II. CANDECOMP/PARAFAC (CP) decomposition

Throughout the paper, we shall denote tensors by bold calligraphic letters, e.g., $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, matrices by bold capital letters, e.g., $\mathbf{A} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_R] \in \mathbb{R}^{I \times R}$, and vectors by bold italic letters, e.g., $\mathbf{a}_j$ or $\mathbf{I} = [I_1, I_2, \ldots, I_N]$. A vector of integer numbers is denoted by colon notation such as $\mathbf{k} = i{:}j = [i, i+1, \ldots, j-1, j]$. For example, we denote $1{:}n = [1, 2, \ldots, n]$. The Kronecker product, the Khatri-Rao (column-wise Kronecker) product, and the (element-wise) Hadamard product are denoted respectively by ⊗, ⊙, ⊛ [38], [39].

Definition 2.1: (Kruskal form (tensor) [38], [40]) A tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is in Kruskal form if

$$\mathcal{X} = \sum_{r=1}^{R} \lambda_r \, \mathbf{a}_r^{(1)} \circ \mathbf{a}_r^{(2)} \circ \cdots \circ \mathbf{a}_r^{(N)} \qquad (1)$$

$$\triangleq \llbracket \boldsymbol{\lambda};\, \mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \ldots, \mathbf{A}^{(N)} \rrbracket, \qquad \boldsymbol{\lambda} = [\lambda_1, \lambda_2, \ldots, \lambda_R], \qquad (2)$$

where the symbol "◦" denotes the outer product, $\mathbf{A}^{(n)} = [\mathbf{a}_1^{(n)}, \mathbf{a}_2^{(n)}, \ldots, \mathbf{a}_R^{(n)}] \in \mathbb{R}^{I_n \times R}$ $(n = 1, 2, \ldots, N)$ are factor matrices, $\mathbf{a}_r^{(n)T} \mathbf{a}_r^{(n)} = 1$ for all $r$ and $n$, and $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_R > 0$.

Definition 2.2: (CANDECOMP/PARAFAC (CP) [1], [2], [40], [41]) Approximation of an order-N data tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ by a rank-R tensor in the Kruskal form means

$$\mathcal{Y} = \widehat{\mathcal{Y}} + \mathcal{E}, \qquad (3)$$


TABLE I: Complexities per iteration of major computations in CPD algorithms. $J = \prod_{n=1}^{N} I_n$, $T = \sum_{n=1}^{N} I_n$.

Computing Process | Complexity
Gradient [1], [2] | $O(NRJ)$
Fast gradient [42] | $O(RJ)$
(Approximate) Hessian and its inverse [5], [29] | $O(R^3 T^3)$
Fast (approximate) Hessian and its inverse [31], [37] | $O(R^2 T + N^3 R^6)$
Exact line search [1], [4], [5] | $O(2^N RJ)$
Rotation [26] | $O(N^3 R^6)$

where $\widehat{\mathcal{Y}} = \llbracket \boldsymbol{\lambda};\, \mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \ldots, \mathbf{A}^{(N)} \rrbracket$, so that $\|\mathcal{Y} - \widehat{\mathcal{Y}}\|_F^2$ is minimized. There are numerous algorithms for CPD, including alternating least squares (ALS) or all-at-once optimization algorithms, or algorithms based on joint diagonalization. In general, most CP algorithms which factorize an order-N tensor face a high computational cost due to computing gradients and (approximate) Hessians, line search and rotation. Table I summarizes the complexities of major computations in popular CPD algorithms. The complexity per iteration of a CP algorithm can be roughly computed based on Table I. For example, the ALS algorithm with line search has a complexity of order $O(NRJ + 2^N RJ + NR^3) = O(2^N RJ + NR^3)$.
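To make these costs concrete, the following NumPy sketch (our illustration, not the authors' code; helper names are ours) performs one ALS sweep for an order-3 CPD. The product Yn @ Z below is the O(NRJ)-type gradient computation listed in Table I.

import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: rows indexed by (i, j), j fastest.
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def als_sweep(Y, factors):
    # One alternating-least-squares sweep for Y ~ [[A0, A1, ..., A_{N-1}]].
    N = Y.ndim
    for n in range(N):
        # Mode-n unfolding: I_n x (product of remaining dims), last dim fastest.
        Yn = np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)
        others = [factors[k] for k in range(N) if k != n]
        Z = others[0]
        for A in others[1:]:                  # Khatri-Rao of remaining factors
            Z = khatri_rao(Z, A)
        G = np.ones((Z.shape[1], Z.shape[1]))
        for A in others:                      # Hadamard product of Gram matrices
            G *= A.T @ A
        factors[n] = (Yn @ Z) @ np.linalg.pinv(G)   # O(RJ) per mode for Yn @ Z
    return factors

Starting from a random initialization, repeated sweeps converge to a (local) minimizer of $\|\mathcal{Y} - \widehat{\mathcal{Y}}\|_F^2$.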

III. CPD of unfolded tensors

In order to deal with existing problems for high order and relatively large scale CPD, the following process is proposed:

1) Reduce the number of dimensions of the tensor Y to a lower order (e.g., order-3) through a tensor unfolding Y~l, which is defined later in this section.
2) Approximate the unfolded tensor Y~l by an order-3 tensor Ŷ~l in the Kruskal form. Dimensions of Y~l which are relatively larger than the rank R can be reduced to R by Tucker compression [43]–[46] prior to CPD, although this is not a lossless compression. In such a case, we only need to decompose an R × R × R dimensional tensor.
3) Estimate the desired components of the original tensor Y on the basis of the tensor Ŷ~l in the Kruskal form.

The method is based on the observation that unfolding of a Kruskal tensor also yields a Kruskal tensor. Moreover, due to uniqueness of CPD under "mild" conditions, the estimated components along the unfolded modes are often good approximations to the components of the full tensor. In the sequel, we

introduce basic concepts that will be used in the rest of this paper. Loss of accuracy in decomposition of the unfolded tensors is analyzed theoretically based on the CRIB.

Definition 3.1 (Reshaping): The reshape operator for a tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ to a size specified by a vector $\mathbf{L} = [L_1, L_2, \ldots, L_M]$ with $\prod_{m=1}^{M} L_m = \prod_{n=1}^{N} I_n$ returns an order-M tensor $\mathcal{X}$, such that $\mathrm{vec}(\mathcal{Y}) = \mathrm{vec}(\mathcal{X})$, and is expressed as $\mathcal{X} = \mathrm{reshape}(\mathcal{Y}, \mathbf{L}) \in \mathbb{R}^{L_1 \times L_2 \times \cdots \times L_M}$.

Definition 3.2 (Tensor transposition [47]): If $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and $\mathbf{p}$ is a permutation of $[1, 2, \ldots, N]$, then $\mathcal{A}^{\langle \mathbf{p} \rangle} \in \mathbb{R}^{I_{p_1} \times \cdots \times I_{p_N}}$ denotes the $\mathbf{p}$-transpose of $\mathcal{A}$ and is defined by

$$\mathcal{A}^{\langle \mathbf{p} \rangle}(i_{p_1}, \ldots, i_{p_N}) = \mathcal{A}(i_1, \ldots, i_N), \qquad 1 \leq \mathbf{i} \leq \mathbf{I} = [I_1, I_2, \ldots, I_N]. \qquad (4)$$

Definition 3.3 (Generalized tensor unfolding): Reshaping a $\mathbf{p}$-transpose $\mathcal{Y}^{\langle \mathbf{p} \rangle}$ to an order-M tensor of size $\mathbf{L} = [L_1, L_2, \ldots, L_M]$ with $L_m = \prod_{k \in l_m} I_k$, where $[l_1, l_2, \ldots, l_M] \equiv [p_1, p_2, \ldots, p_N]$ and $l_m = [l_m(1), \ldots, l_m(K_m)]$:

$$\mathcal{Y}_{[l]} = \mathrm{reshape}(\mathcal{Y}^{\langle \mathbf{p} \rangle}, \mathbf{L}), \qquad l = [l_1, l_2, \ldots, l_M]. \qquad (5)$$

Remark 3.1:

1) If l = [n, (1:n − 1, n + 1:N)], then Y~l = Y(n) is mode-n unfolding.

2) If Y is an order-4 tensor, then Y~1,2,(3,4) is an order-3 tensor of size I1 × I2 × I3I4.

3) If Y is an order-6 tensor, then Y~[(1,4),(2,5),(3,6)] is an order-3 tensor of dimension I1I4 × I2I5 × I3I6.

We denote the Khatri-Rao product of a set of matrices $U^{(n)}$, $n = 1, 2, \ldots, N$, as $\bigodot_{n=1}^{N} U^{(n)} \triangleq U^{(N)} \odot U^{(N-1)} \odot \cdots \odot U^{(1)}$.

Lemma 3.1: Unfolding of a rank-R tensor in the Kruskal form Y = ⟦λ; A(1), A(2), ..., A(N)⟧ returns an order-M rank-R Kruskal tensor Y~l, l = [l1, l2, ..., lM], given by

$$\mathcal{Y}_{[l]} = \llbracket \boldsymbol{\lambda};\, \mathbf{B}^{(1)}, \mathbf{B}^{(2)}, \ldots, \mathbf{B}^{(M)} \rrbracket, \qquad (6)$$

where $\mathbf{B}^{(m)} = \bigodot_{k \in l_m} \mathbf{A}^{(k)} \in \mathbb{R}^{(\prod_{k \in l_m} I_k) \times R}$ $(m = 1, 2, \ldots, M)$ are merging factor matrices.

Remark 3.2:
1) If $l = [n, (1{:}n-1, n+1{:}N)]$, then $\mathcal{Y}_{[l]} = \mathbf{Y}_{(n)} = \mathbf{A}^{(n)} \operatorname{diag}(\boldsymbol{\lambda}) \left( \bigodot_{k \neq n} \mathbf{A}^{(k)} \right)^T$.
2) If $l = [(1{:}n), (n+1{:}N)]$, then $\mathcal{Y}_{[l]} = \left( \bigodot_{k=1}^{n} \mathbf{A}^{(k)} \right) \operatorname{diag}(\boldsymbol{\lambda}) \left( \bigodot_{k=n+1}^{N} \mathbf{A}^{(k)} \right)^T$.
3) For an order-4 Kruskal tensor Y, $\mathcal{Y}_{[1,2,(3,4)]} = \llbracket \boldsymbol{\lambda};\, \mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \mathbf{A}^{(4)} \odot \mathbf{A}^{(3)} \rrbracket$.
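As a numerical sanity check of Lemma 3.1 and Remark 3.2(3), the following NumPy fragment (ours; sizes and seed are arbitrary) verifies that the [1, 2, (3, 4)] unfolding of a random order-4 Kruskal tensor coincides with the order-3 Kruskal tensor whose merged factor is A(4) ⊙ A(3).

import numpy as np

rng = np.random.default_rng(0)
I1, I2, I3, I4, R = 4, 5, 3, 6, 2                 # self-chosen sizes
A = [rng.standard_normal((I, R)) for I in (I1, I2, I3, I4)]
lam = np.array([2.0, 1.0])

# Order-4 Kruskal tensor Y = [[lam; A1, A2, A3, A4]].
Y = np.einsum('r,ir,jr,kr,lr->ijkl', lam, *A)

# Unfolding [1, 2, (3, 4)]: merge modes 3 and 4 with the mode-3 index fastest.
Yl = Y.transpose(0, 1, 3, 2).reshape(I1, I2, I4 * I3)

def khatri_rao(A, B):
    # (A ⊙ B)[(i, j), r] = A[i, r] * B[j, r], second index fastest.
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

B = khatri_rao(A[3], A[2])                        # merged factor A4 ⊙ A3
Yl_kruskal = np.einsum('r,ir,jr,kr->ijk', lam, A[0], A[1], B)
print(np.allclose(Yl, Yl_kruskal))                # True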

Corollary 3.1: An order-K tensor $\mathcal{B}_{mr}$ of size $I_{l_{m1}} \times I_{l_{m2}} \times \cdots \times I_{l_{mK}}$, $l_m = [l_{m1}, \ldots, l_{mK}]$, folded from the r-th column vector $\mathbf{b}_r^{(m)}$ of $\mathbf{B}^{(m)}$, i.e., $\mathrm{vec}(\mathcal{B}_{mr}) = \mathbf{b}_r^{(m)}$, is a rank-1 tensor

$$\mathcal{B}_{mr} = \mathbf{a}_r^{(l_{m1})} \circ \mathbf{a}_r^{(l_{m2})} \circ \cdots \circ \mathbf{a}_r^{(l_{mK})}. \qquad (7)$$


Algorithm 1: rank-one FCP

Input: Data tensor Y: (I1 × I2 × ··· × IN), rank R,
       unfolding rule l = [l1, l2, ..., lM] where lm = [lm(1), ..., lm(Km)]
Output: λ ∈ R^R, N matrices A(n) ∈ R^{In×R}
begin
  % Stage 1: Tensor unfolding and optional compression
  1  ⟦G; U(1), ..., U(M)⟧ = TD(Y~l, min(I, R))     % Tucker decomposition of order-M Y~l
  % Stage 2: CPD of the unfolded (and reduced) tensor
  2  ⟦λ; B(1), ..., B(M)⟧ = CPD(G, R)              % order-M CPD of the core tensor
  3  for m = 1, 2, ..., M do B(m) ← U(m) B(m)       % back projection of TD
  % Stage 3: Rank-one approximation to merging components
  4  for m = 1, 2, ..., M do
  5     for r = 1, 2, ..., R do
  6        ⟦g; ar(lm1), ..., ar(lmK)⟧ = TD(reshape(br(m), [Ilm(1), ..., Ilm(Km)]), 1)
  7        λr ← λr g

TD(Y, R): rank-R Tucker decomposition of an order-N tensor Y, where R = [R1, R2, ..., RN].
Ŷ = CPD(Y, R, Yinit): approximates an order-N tensor or a tensor in the Kruskal form Y by a rank-R Kruskal tensor Ŷ using initial values Yinit.

In practice, for real data, the folded tensors $\mathcal{B}_{mr}$ are not exact rank-1 tensors, but they can be approximated by rank-1 tensors composed from components corresponding to the modes in lm. In other words, computing the leading left singular vector of the mode-k unfolding $[\mathcal{B}_{mr}]_{(k)}$ is the simplest approach to recover $\mathbf{a}_r^{(l_{mk})}$ from $\mathbf{b}_r^{(m)}$ for $k = 1, 2, \ldots, K$. Pseudo-code of this simple algorithm for unFolding CPD (FCP) is described in Algorithm 1. A more complex and efficient algorithm is discussed later.
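A minimal sketch (ours; the helper name is hypothetical) of this rank-one recovery step: fold br(m) back into a K-dimensional array and peel off one leading singular vector per mode. For K > 2 the sequential SVD only approximates the best rank-1 tensor, matching the "simplest approach" described above.

import numpy as np

def rank1_components(b, dims):
    # b: merged component of length prod(dims); returns scale g and unit
    # vectors (a_1, ..., a_K) such that b ≈ g * a_1 ⊗ ... ⊗ a_K
    # (C-order folding: the last mode index varies fastest).
    vecs = []
    g = 1.0
    for d in dims[:-1]:
        M = b.reshape(d, -1)                    # split off the current mode
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        vecs.append(U[:, 0])                    # leading left singular vector
        g *= s[0]
        b = Vt[0]                               # residual merged component
    vecs.append(b / np.linalg.norm(b))
    g *= np.linalg.norm(b)
    return g, vecs

# Example: exact recovery when the folded tensor is truly rank-one.
a, c, d = np.ones(4), np.arange(1.0, 6.0), np.ones(3)
b = np.kron(np.kron(a, c), d)
g, vecs = rank1_components(b, (4, 5, 3))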

A. Selecting an unfolding strategy

For (noiseless) tensors which have exact rank-R CP decompositions without (nearly) collinear components, the factors computed from unfolded tensors can be close to the true solutions. However, for real data tensors, there is a loss of accuracy when using the rank-one approximation approach. The loss can be affected by the unfolding, or by the rank R of the decomposition, especially when R is below the true rank of the data tensor. This section analyzes such loss by comparing CRIBs on the first component $\mathbf{a}_1^{(1)}$ of CPDs of the full tensor and its unfolded version. We use $\mathbf{a}_1$ as a shorthand notation for $\mathbf{a}_1^{(1)}$. The results of this section give us an insight into how to unfold a tensor with no, or minimal, loss of accuracy. The accuracy loss in decomposition of the unfolded tensor is defined as the loss of CRIB [37], [48], [49] on

components of the unfolded tensor through the unfolding rule l compared with the CRIB on components of the original tensor. For simplicity, we consider tensors in the Kruskal form (2.1) which have $\mathbf{a}_r^{(n)T} \mathbf{a}_s^{(n)} = c_n$ for all $n$, $r \neq s$, and $\mathbf{a}_r^{(n)T} \mathbf{a}_r^{(n)} = 1$, $-1 \leq c_n \leq 1$. The coefficients $c_n$ are called degrees of collinearity.

1) Loss in unfolding order-4 tensors: For order-4 tensors, since CRIB(a1) is largely independent of $c_1$ unless $c_1$ is close to ±1 [37], we consider the case when $c_1 = 0$. Put $h = c_2 c_3 c_4$ and $\theta = \sigma^2 / \lambda_1^2$. From [37] and Appendix B, we get CRIBs for rank-2 CPD in explicit forms:

$$\mathrm{CRIB}(a_1) = \frac{\theta}{1 - h^2} \left( I_1 - 1 + \frac{c_2^2 c_3^2 + c_2^2 c_4^2 + c_3^2 c_4^2 - 3h^2}{1 + 2h^2 - c_2^2 c_3^2 - c_2^2 c_4^2 - c_3^2 c_4^2} \right), \qquad (8)$$

$$\mathrm{CRIB}_{[1,2,(3,4)]}(a_1) = \frac{\theta}{1 - h^2} \left( I_1 - 3 + \frac{1}{1 - c_2^2} + \frac{1}{1 - c_3^2 c_4^2} \right), \qquad (9)$$

$$\mathrm{CRIB}_{[1,(2,3),4]}(a_1) = \frac{\theta}{1 - h^2} \left( I_1 - 3 + \frac{1}{1 - c_2^2 c_3^2} + \frac{1}{1 - c_4^2} \right). \qquad (10)$$

In general, $\mathrm{CRIB}(a_1) \leq \mathrm{CRIB}_{[1,2,(3,4)]}(a_1)$. The equality is achieved for $c_2 = 0$:

$$\mathrm{CRIB}(a_1)\big|_{c_2=0} = \mathrm{CRIB}_{[1,2,(3,4)]}(a_1)\big|_{c_2=0} = \theta \left( I_1 - 2 + \frac{1}{1 - c_3^2 c_4^2} \right). \qquad (11)$$

It means that if modes 1 and 2 comprise (nearly) orthogonal components, the tensor unfolding [1, 2, (3, 4)] does not affect the accuracy of the decomposition.

From (9) and (10), it is obvious that $\mathrm{CRIB}_{[1,2,(3,4)]}(a_1) \leq \mathrm{CRIB}_{[1,(2,3),4]}(a_1)$ if $c_2^2 \leq c_4^2$. This indicates that collinearities of modes to be unfolded should be higher than those of the other modes in order to reduce the loss of accuracy in estimating $a_1$. Note that the new factor matrices yielded through tensor unfolding have lower collinearity than the original ones. Moreover, tensors with highly collinear components are always more difficult to decompose than those with lower collinearity [29], [50], [51]. Hence, it is natural to unfold modes with the highest collinearity so that the CPD becomes easier. This rule also holds for higher rank R, and is illustrated in a particular case when $c_1 = c_3 = 0$:

$$\mathrm{CRIB}_{[1,2,(3,4)]}(a_1) = \theta \left( I_1 - R + \frac{R-1}{1 - c_2^2} \right), \qquad (12)$$

$$\mathrm{CRIB}_{[1,(2,3),4]}(a_1) = \theta \left( I_1 - R + \frac{R-1}{1 - c_4^2} \right). \qquad (13)$$

The unfolding [1, 2, (3, 4)] is more efficient than the unfolding [1, (2, 3), 4] when $|c_2| < |c_4|$, although this unfolding still causes some loss of CRIB regardless of $c_4$, since

$$\mathrm{CRIB}(a_1) = \theta \left( I_1 - R + \frac{R-1}{1 - c_2^2 c_4^2} \right) \leq \theta \left( I_1 - R + \frac{R-1}{1 - c_2^2} \right) = \mathrm{CRIB}_{[1,2,(3,4)]}(a_1), \quad \text{for all } c_2.$$

Moreover, the loss is significant when $c_4$ is small enough. Note that for this case, the unfolding [1, 3, (2, 4)] is suggested because it does not cause any loss according to the previous rule.
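The comparison between (8)-(10) is easy to check numerically. The fragment below (ours) evaluates the three bounds for θ = 1 and illustrates CRIB(a1) ≤ CRIB[1,2,(3,4)](a1) ≤ CRIB[1,(2,3),4](a1) when c2² ≤ c4².

def cribs_order4_rank2(I1, c2, c3, c4, theta=1.0):
    # Equations (8)-(10) with c1 = 0 and h = c2*c3*c4.
    h = c2 * c3 * c4
    full = theta / (1 - h**2) * (I1 - 1 +
           (c2**2*c3**2 + c2**2*c4**2 + c3**2*c4**2 - 3*h**2) /
           (1 + 2*h**2 - c2**2*c3**2 - c2**2*c4**2 - c3**2*c4**2))
    unf_12_34 = theta / (1 - h**2) * (I1 - 3 + 1/(1 - c2**2) + 1/(1 - c3**2*c4**2))
    unf_1_23_4 = theta / (1 - h**2) * (I1 - 3 + 1/(1 - c2**2*c3**2) + 1/(1 - c4**2))
    return full, unf_12_34, unf_1_23_4

print(cribs_order4_rank2(I1=10, c2=0.3, c3=0.9, c4=0.9))
# The three values are ordered full <= [1,2,(3,4)] <= [1,(2,3),4] here,
# since c2^2 <= c4^2.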


In other words, modes which comprise orthogonal or lowly collinear components (i.e., $c_n \approx 0$) should not be folded with the other modes, unless the other modes have nearly orthogonal columns as well.

Example 1 We illustrate the similar behavior of the CRIB over unfolding, but for higher ranks. We decomposed Y~[1,2,(3,4)] unfolded from rank-R tensors of size R × R × R × R with R = 3, 5, ..., 30, corrupted with additive Gaussian noise of 10 dB SNR. There was no significant loss in factors when modes 1 and 2 comprised lowly collinear components, regardless of the collinearity in modes 3 and 4, as seen in Figs. 1(a)-1(c). For all the other cases of (c1, c2, c3, c4), there were always significant losses, especially when all the factors comprised highly collinear components (i.e., cn close to ±1), as seen in Figs. 1(d)-1(f).

When the first mode has nearly collinear factors, i.e., c1 is close to ±1, we have [37]

$$\mathrm{CRIB}(a_1)\big|_{c_1 = \pm 1} = \frac{\theta (I_1 - 1)}{1 - h^2} < \mathrm{CRIB}(a_1)\big|_{c_1 = 0}, \qquad (14)$$

but the expressions for the folded tensor decomposition remain unchanged. It means that the loss occurs, as seen in Fig. 1(d) and Fig. 1(f).

2) Loss in unfolding order-5 tensors: For order-5 rank-2 tensors, we consider the case when $c_1 = 0$, and put $h = c_2 c_3 c_4 c_5$. CRIBs of decompositions of the full and unfolded tensors are given by

$$\mathrm{CRIB}(a_1) = \frac{\theta}{1 - h^2} \left( I_1 - 1 + \frac{\zeta - 4h^2}{1 + 3h^2 - \zeta} \right), \qquad (15)$$

$$\mathrm{CRIB}_{[1,2,(3,4,5)]}(a_1) = \frac{\theta}{1 - h^2} \left( I_1 - 3 + \frac{1}{1 - c_2^2} + \frac{1}{1 - c_3^2 c_4^2 c_5^2} \right), \qquad (16)$$

$$\mathrm{CRIB}_{[1,(2,3),(4,5)]}(a_1) = \frac{\theta}{1 - h^2} \left( I_1 - 3 + \frac{1}{1 - c_2^2 c_3^2} + \frac{1}{1 - c_4^2 c_5^2} \right), \qquad (17)$$

where $\zeta = c_2^2 c_3^2 c_4^2 + c_2^2 c_3^2 c_5^2 + c_2^2 c_4^2 c_5^2 + c_3^2 c_4^2 c_5^2$. From (16) and (17), it is obvious that $\mathrm{CRIB}_{[1,2,(3,4,5)]}(a_1) \leq \mathrm{CRIB}_{[1,(2,3),(4,5)]}(a_1)$ if $c_2^2 \leq c_4^2 c_5^2$. This rule coincides with that for order-4 tensors: reduce the collinearity of the merging factor matrices as much as possible. For $c_2 = 0$, the expressions (15) and (16) become identical, but expression (17) is larger, in general.

For higher order tensors, we analyze the CRIB loss in decomposition of order-6 tensors under the assumption that $c_1 = c_2 = \cdots = c_6 = c$, through 5 unfoldings $l_1 = [1, 2, 3, 4, (5, 6)]$, $l_2 = [1, 2, 3, (4, 5, 6)]$, $l_3 = [1, 2, (3, 4, 5, 6)]$, $l_4 = [1, (2, 3), (4, 5, 6)]$ and $l_5 = [1, 2, (3, 4), (5, 6)]$:

[Fig. 1 about here]
Fig. 1. Median SAE of all components for factors over 30 Monte Carlo runs vs CRIB in decomposition of order-4 tensors of size In = R = 3, 5, 10, ..., 30, for all n, through the unfolding rule l = [1, 2, (3, 4)]. Correlation coefficients cn were chosen from the set {0.1, 0.9} for all n. The signal to white Gaussian noise power ratio (SNR) was 10 dB or 30 dB. Panels: (a) (c1, c2, c3, c4) = (0.1, 0.1, 0.1, 0.1); (b) (0.1, 0.1, 0.9, 0.1); (c) (0.1, 0.1, 0.9, 0.9); (d) (0.1, 0.9, 0.1, 0.9); (e) (0.1, 0.9, 0.9, 0.9); (f) (0.9, 0.9, 0.9, 0.9), SNR = 30 dB. Axes: tensor size I (horizontal), SAE (dB) (vertical).

$$\mathrm{CRIB}(a_1) = \theta \left[ \frac{I_1 - 1}{1 - c^{10}} + \frac{5c^8 (4c^6 + 3c^4 + 2c^2 + 1)}{(1 - c^{10})(1 - c^8)(1 + 3c^2 + c^4)(1 + c^2 + 6c^4 + c^6 + c^8)} \right],$$

$$\mathrm{CRIB}_{l_1}(a_1) = \theta \left[ \frac{I_1 - 1}{1 - c^{10}} + \frac{c^6 (6c^8 + 11c^6 + 7c^4 + 5c^2 + 1)}{(1 - c^{10})(1 - c^8)(1 + c^2)(1 + 2c^2 + 6c^4 + 2c^6 + c^8)} \right],$$

$$\mathrm{CRIB}_{l_2}(a_1) = \theta \left[ \frac{I_1 - 1}{1 - c^{10}} + \frac{c^4 (4c^6 + 3c^4 + 2c^2 + 1)}{(1 - c^{10})(1 - c^8)(1 + 3c^2 + c^4)} \right],$$

$$\mathrm{CRIB}_{l_3}(a_1) = \theta \left[ \frac{I_1 - 1}{1 - c^{10}} + \frac{c^2 (2c^6 + c^4 + c^2 + 1)}{(1 - c^{10})(1 - c^8)} \right],$$

$$\mathrm{CRIB}_{l_4}(a_1) = \theta \left[ \frac{I_1 - 1}{1 - c^{10}} + \frac{c^4 (1 + c^4)(2c^4 + 2c^2 + 1)}{(1 - c^{10})(1 - c^8)(1 + c^2 + c^4)(1 - c + c^2)} \right],$$

$$\mathrm{CRIB}_{l_5}(a_1) = \theta \left[ \frac{I_1 - 1}{1 - c^{10}} + \frac{c^6 (2c^6 + 2c^4 + 2c^2 + 1)(c^6 + 4c^4 + 3c^2 + 2)}{(1 - c^{10})(1 - c^8)(1 + c^2 + c^4)(c^8 + 3c^6 + 6c^4 + 3c^2 + 1)} \right].$$

It holds CRIB < CRIBl1 < CRIBl5 < CRIBl2 < CRIBl4 < CRIBl3 (see Fig. 2(a) for θ = 1 and I1 = R).

Fig. 2(b) illustrates such behavior of the CRIB losses $L = -10 \log_{10} \frac{\mathrm{CRIB}(a_1)}{\mathrm{CRIB}_l(a_1)}$ (dB), but for a higher rank R = 20 and tensor size In = R = 20, for all n. The CRIB loss is insignificant when components are nearly orthogonal (i.e., c → 0), and is relevant for highly collinear components, i.e., c → 1. Unfolding l1 causes a CRIB loss of less than 1 dB, while unfoldings l2, l4 and l3 can cause losses of 3, 5 and 7 dB, respectively. The result confirms that two-mode unfolding causes a smaller CRIB loss than the other rules. The unfoldings l4 and l5 are more efficient than the multimode unfoldings l2 and l3, respectively, in decomposition of unfolded tensors of the same orders.

3) A case when two factor matrices have orthogonal columns: As pointed out in (11), there is no loss of accuracy in estimation of A(1) and A(2) through unfolding when these two factor matrices have mutually orthogonal columns. The result also holds for arbitrary order-N rank-R tensors which have orthogonal components in two modes. In such a case, the analytical CRIB is given by the following theorem.

Theorem 3.1 ([37]): When A(1) and A(2) have mutually orthogonal columns, it holds

$$\mathrm{CRIB}(a_1) = \frac{\sigma^2}{\lambda_1^2} \left( I_1 - R + \sum_{r=2}^{R} \frac{1}{1 - \gamma_r^2} \right), \qquad \gamma_r = \prod_{n=3}^{N} \mathbf{a}_1^{(n)T} \mathbf{a}_r^{(n)}, \quad r = 2, 3, \ldots, R. \qquad (18)$$

It is obvious that $\mathrm{CRIB}_{[1,2,(3:N)]}(a_1) = \mathrm{CRIB}(a_1)$. Hence, estimation of A(1) and A(2) through unfolding is lossless in terms of accuracy. An important observation from Theorem 3.1 is that all the factor matrices in CPD with orthogonality constraints [52], [53] can be estimated through order-3 tensors unfolded from the original data without any loss of accuracy. That is, an algorithm for order-3 orthogonally constrained CPD on two or three modes can work for arbitrary order-N tensors.


[Fig. 2 about here]
Fig. 2. The CRIB loss in decomposition of order-6 rank-R tensors with size In = R and correlation coefficients cn = c, for all n, following 5 unfolding rules. The CRIB loss is significant when components are collinear, i.e., c → 1. Unfolding l = [1:4, (5, 6)] causes a smaller CRIB loss than the other rules. The unfolding l = [1, 2, (3, 4), (5, 6)] is more efficient than the multimode unfolding rules. Panels: (a) CRIB (dB) for order-6 rank-2 tensors; (b) CRIB loss for rank-20 tensors.

B. Unfolding strategy

Based on the above analysis of the CRIB loss, we summarize and suggest an efficient unfolding strategy to reduce the loss of accuracy. Without loss of generality, assume that 0 ≤ |c1| ≤ |c2| ≤ ··· ≤ |cN| ≤ 1. The following procedure should be carried out when unfolding an order-N tensor to order M (typically, M = 3):

• Unfold the two modes which correspond to the two largest values |cn|, i.e., modes (N−1, N). This yields a new factor matrix with a correlation coefficient c̃_{N−1} = c_{N−1} c_N. The tensor order is reduced by one.
• Unfold the two modes which correspond to the two largest collinearity values among the (N−1) values [c1, c2, ..., c_{N−2}, c̃_{N−1}]. This can be (N−3, N−2) if |c̃_{N−1}| < |c_{N−2}|; otherwise, (N−2, N−1, N). The new correlation coefficient is c_{N−3} c_{N−2} or c_{N−2} c_{N−1} c_N.
• Continue unfolding until the tensor order is M.

In addition, (nearly) orthogonal modes should not be merged in the same group. For an order-4 tensor, the unfolding [1, 2, (3, 4)] is recommended. A sketch of this greedy strategy in code is given below.
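The sketch below (our paraphrase of the strategy, not the authors' implementation) encodes the greedy rule: repeatedly merge the two mode groups with the largest absolute collinearity until M groups remain. The caveat about (nearly) orthogonal modes is not encoded.

def unfolding_rule(c, M=3):
    # c: list of collinearity degrees c_n, one per mode (1-based mode labels).
    # Greedily merge the two mode groups with the largest |c| until only
    # M groups remain; a merged group gets coefficient c_i * c_j.
    groups = [([n], cn) for n, cn in enumerate(c, start=1)]
    while len(groups) > M:
        groups.sort(key=lambda g: abs(g[1]))          # ascending |c|
        (m1, c1), (m2, c2) = groups[-2], groups[-1]   # two most collinear
        groups = groups[:-2] + [(m1 + m2, c1 * c2)]
    return [g[0] for g in groups]

print(unfolding_rule([0.1, 0.7, 0.7, 0.7, 0.8]))
# -> [[1], [4, 5], [2, 3]], i.e., the rule [1, (2, 3), (4, 5)]
#    recommended for the setting of Example 2 below.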

Example 2 We decomposed order-5 tensors with size In = R = 10 and additive Gaussian noise of 10 dB SNR. Correlation coefficients of the factor matrices were [0.1, 0.7, 0.7, 0.7, 0.8]. Three tensor unfoldings l1 = [(1, 4, 5), 2, 3], l2 = [1, 2, (3, 4, 5)] and l3 = [1, (2, 3), (4, 5)] were applied to the order-5 tensors. Unfolding l1 = [(1, 4, 5), 2, 3] caused the largest loss of 2 dB, with an average MSAE = 36.62 dB, as illustrated in Fig. 3. The unfolding l3, recommended according to the above strategy, achieved an average MSAE = 38.29 dB compared with the average CRIB = 38.67 dB on all the components.

Example 3 Unfolding tensors with the same collinearity in all modes. We verified the unfolding rules for order-6 tensors with the simplified assumption that c1 = c2 = ··· = c6 = c. Since the correlation coefficients are identical, the unfolding l = [1, 2, (3, 4), (5, 6)] is one of the best rules. Fig. 2(b) shows that the CRIB loss by l = [1, 2, 3, (4, 5, 6)] was higher than the loss by l = [1, 2, (3, 4), (5, 6)].

C. Unfolding without collinearity information

For real-world data, although the collinearity degrees of the factor matrices are unknown, the above strategy is still applicable. Since the decomposition through tensor unfolding decomposes only an order-3 tensor, the computation is relatively fast. We can try any tensor unfolding, verify the (average) collinearity degrees of the estimated factors,

$$c_n = \frac{\sum_{r \neq s} |\mathbf{a}_r^{(n)T} \mathbf{a}_s^{(n)}|}{R(R-1)}, \qquad (n = 1, 2, \ldots, N),$$

and then proceed with a further decomposition using a new tensor unfolding.
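In code, the average collinearity degree of an estimated factor matrix can be computed as follows (our helper; columns are normalized to unit norm first).

import numpy as np

def collinearity_degree(A):
    # Average |a_r^T a_s| over all pairs r != s of the columns of A (I x R).
    A = A / np.linalg.norm(A, axis=0)      # normalize columns
    G = np.abs(A.T @ A)                    # matrix of |inner products|
    R = A.shape[1]
    return (G.sum() - R) / (R * (R - 1))   # exclude the unit diagonal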

Example 2(b) We replicated the simulations in Example 2, but assumed that there was no prior collinearity information and the bad unfolding rule l1 = [(1, 4, 5), 2, 3] was applied. The average collinearity degrees of the estimated factor matrices, cn = 0.0989, 0.7007, 0.6992, 0.7021, 0.8014, for n = 1, ..., N, respectively, indicated that the unfolding [(1, 4, 5), 2, 3] is not a good one. The unfolding l3 = [1, (2, 3), (4, 5)] was then suggested, and a further decomposition was performed. This improved the MSAE by about 2 dB. For more examples, see the decomposition of the ITPC tensor in Example 5 when R = 8.

IV. Fast Approximation for High Order and Difficult Scenario CPD

An appropriate unfolding rule can reduce the loss of the decomposition. However, the loss always occurs when factors have high collinearity or when orthogonal modes are unfolded. Moreover, in practice, a rank-R CPD may not fit the data tensor. This could happen when R is not exactly the true rank of the tensor. Especially for under-rank CPD, the error tensor E can still be explained by tensors in the Kruskal form. In this case, components of the merging factor matrices tend to comprise information of the other components in a higher rank CP decomposition. Hence, they are no longer rank-one matrices/tensors, and approximation of the merging components by rank-one tensors cannot yield good approximations to the true factors. To this end, low-rank approximations to merging components are proposed, and components are estimated through two major stages:


[Fig. 3 about here]
Fig. 3. Effect of unfolding rules on the accuracy loss in decomposition of order-5 tensors of size In = R = 10 with c1 = 0.1, c2 = c3 = c4 = 0.7 and c5 = 0.8. Mean SAEs (dB) were computed for all the components over 100 Monte Carlo runs. Unfolding rules compared: [(1, 4, 5), 2, 3], [1, 2, (3, 4, 5)], [1, (2, 3), (4, 5)].

1) Construct an order-N structured Kruskal tensor ỸJ from the order-M rank-R Kruskal tensor which approximates the unfolded tensor Y~l. ỸJ often has a higher rank than R.
2) Approximate ỸJ by a rank-R Kruskal tensor, which is the final result.

The algorithm is first derived for unfolding two modes, and then extended to multimode unfolding.

A. Unfolding two modes

We consider a rank-R CPD of Y and a simple unfolding l = [1, ..., N−2, (N−1, N)]:

$$\mathcal{Y}_{[l]} = \sum_{r=1}^{R} \lambda_r \, \mathbf{b}_r^{(1)} \circ \mathbf{b}_r^{(2)} \circ \cdots \circ \mathbf{b}_r^{(N-1)} + \mathcal{E}. \qquad (19)$$

Assume the matrices $\mathbf{F}_r = \mathrm{reshape}(\mathbf{b}_r^{(N-1)}, [I_{N-1} \times I_N])$ have rank $J_r$ $(1 \leq J_r \ll I_{N-1})$, i.e., $\mathbf{F}_r = \mathbf{U}_r \boldsymbol{\Sigma}_r \mathbf{V}_r^T$ for $r = 1, 2, \ldots, R$, where $\boldsymbol{\Sigma}_r = \operatorname{diag}(\boldsymbol{\sigma}_r)$ and the singular values $\boldsymbol{\sigma}_r = [\sigma_{r1}, \sigma_{r2}, \ldots, \sigma_{rJ_r}]$ satisfy $1 \geq \sigma_{r1} \geq \sigma_{r2} \geq \cdots \geq \sigma_{rJ_r} > 0$ and $\sum_{j=1}^{J_r} \sigma_{rj}^2 = 1$. By replacing all $\mathbf{b}_r^{(N-1)}$ by the matrices $\mathbf{U}_r$ and $\widetilde{\mathbf{V}}_r$ for $r = 1, 2, \ldots, R$, and replicating the components $\mathbf{b}_r^{(n)}$ $(n = 1, 2, \ldots, N-2)$ $J_r$ times, we generate an order-N rank-J tensor in the Kruskal form, where $J = \sum_r J_r$.

Lemma 4.1: The order-N rank-J Kruskal tensor $\widetilde{\mathcal{Y}}_J = \llbracket \tilde{\boldsymbol{\lambda}}; \widetilde{\mathbf{A}}^{(1)}, \ldots, \widetilde{\mathbf{A}}^{(N-1)}, \widetilde{\mathbf{A}}^{(N)} \rrbracket$, where $J = \sum_{r=1}^{R} J_r$


$(R \leq J \leq R \min(I_{N-1}, I_N))$, $\tilde{\boldsymbol{\lambda}} = \left[ \lambda_1 \mathbf{1}_{J_1}^T, \lambda_2 \mathbf{1}_{J_2}^T, \ldots, \lambda_R \mathbf{1}_{J_R}^T \right]^T \in \mathbb{R}^J$, and

$$\widetilde{\mathbf{A}}^{(n)} = \begin{cases} \mathbf{B}^{(n)} \mathbf{M} \in \mathbb{R}^{I_n \times J}, & n = 1, \ldots, N-2, \\ [\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_R] \in \mathbb{R}^{I_{N-1} \times J}, & n = N-1, \\ [\widetilde{\mathbf{V}}_1, \widetilde{\mathbf{V}}_2, \ldots, \widetilde{\mathbf{V}}_R] \in \mathbb{R}^{I_N \times J}, \quad \widetilde{\mathbf{V}}_r = \mathbf{V}_r \operatorname{diag}(\boldsymbol{\sigma}_r), & n = N, \end{cases}$$

$$\mathbf{M} = \operatorname{blkdiag}(\mathbf{1}_{1 \times J_1}, \mathbf{1}_{1 \times J_2}, \ldots, \mathbf{1}_{1 \times J_R}),$$

has the same approximation error as the best rank-R CPD of Y~l, i.e., $\|\mathcal{Y} - \widetilde{\mathcal{Y}}_J\|_F^2 = \|\mathcal{E}\|_F^2$.

If $J_r = 1$ for all $r$, $\widetilde{\mathcal{Y}}_J$ is the approximation $\widehat{\mathcal{Y}}$ of the true factors as pointed out in the previous section. Otherwise, the order-N rank-J Kruskal tensor $\widetilde{\mathcal{Y}}_J$ is approximated by a rank-R Kruskal tensor $\widehat{\mathcal{Y}}$. Note that this procedure does not access or manipulate the real data $\mathcal{Y}$. For example, the mode-n CP gradients of $\widetilde{\mathcal{Y}}_J$ with respect to $\mathbf{A}^{(k)}$ $(k \neq n$, $n = 1, 2, \ldots, N)$, which are the largest workload in CP algorithms such as ALS, OPT and dGN, can be quickly computed as illustrated in Appendix A, with a low computational complexity of order $O\!\left( \left( JR(I_{N-1} + I_N) + R^2 \sum_{n=1}^{N-2} I_n \right) N \right)$. It means that the computation of $\left( \widetilde{\mathbf{Y}}_{J(n)} - \widehat{\mathbf{Y}}_{(n)} \right) \left( \bigodot_{k \neq n} \mathbf{A}^{(k)} \right)$ is much faster than the computation on the raw data, $\left( \mathbf{Y}_{(n)} - \widehat{\mathbf{Y}}_{(n)} \right) \left( \bigodot_{k \neq n} \mathbf{A}^{(k)} \right)$. In other words, estimation of the factors $\mathbf{A}^{(n)}$ from the Kruskal tensor $\widetilde{\mathcal{Y}}_J$ is relatively fast.
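A compact sketch (ours; names hypothetical) of the construction in Lemma 4.1 for a two-mode unfolding, assuming the merged index is column-major as in the paper's reshape: each merged component is folded to Fr, truncated by SVD at energy level τ, and the pieces are stacked into λ̃, the U- and Ṽ-blocks, and the replication matrix M.

import numpy as np

def structured_kruskal(lam, B_merged, I_nm1, I_n, tau=0.98):
    # lam: (R,) weights of the CPD of the unfolded tensor; B_merged:
    # (I_nm1 * I_n, R) merged factor with column-major merged index.
    lam_t, Ub, Vb, Js = [], [], [], []
    for r in range(lam.size):
        Fr = B_merged[:, r].reshape(I_nm1, I_n, order='F')
        U, s, Vt = np.linalg.svd(Fr, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        Jr = int(np.searchsorted(energy, tau)) + 1   # smallest J_r reaching tau
        lam_t.append(np.full(Jr, lam[r]))            # lambda_r replicated J_r times
        Ub.append(U[:, :Jr])
        Vb.append(Vt[:Jr].T * s[:Jr])                # V_r diag(sigma_r)
        Js.append(Jr)
    M = np.zeros((lam.size, sum(Js)))                # M = blkdiag(1_{1xJ_1}, ...)
    col = 0
    for r, Jr in enumerate(Js):
        M[r, col:col + Jr] = 1.0
        col += Jr
    # The unchanged factor matrices are then replicated as A_tilde = B @ M.
    return np.concatenate(lam_t), np.hstack(Ub), np.hstack(Vb), M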

When the matrices $\mathbf{F}_r$ $(r = 1, 2, \ldots, R)$ do not have an exact low-rank representation, we consider their truncated SVDs such that $\rho_r = \sum_{j=1}^{J_r} \sigma_{rj}^2 \geq \tau$, $0 \ll \tau \leq 1$. The parameter τ controls the loss of fit caused by the low-rank approximations. The higher τ, the lower the loss of fit, but the higher the approximation ranks. In the simulations, we set τ ≥ 0.98.

Let $\widetilde{\mathcal{Y}}_R$ denote the solution of the rank-one FCP algorithm (i.e., using Algorithm 1):

$$\widetilde{\mathcal{Y}}_R = \llbracket \boldsymbol{\lambda}_R;\, \mathbf{B}^{(1)}, \ldots, \mathbf{B}^{(N-2)}, [\mathbf{u}_{11}, \mathbf{u}_{21}, \ldots, \mathbf{u}_{R1}], [\mathbf{v}_{11}, \mathbf{v}_{21}, \ldots, \mathbf{v}_{R1}] \rrbracket, \qquad (20)$$

where $\boldsymbol{\lambda}_R = [\lambda_1 \sigma_{11}, \lambda_2 \sigma_{21}, \ldots, \lambda_R \sigma_{R1}] \in \mathbb{R}^R$. It is straightforward to see that $\mathbf{b}_r^{(N-1)} = \sum_{q=1}^{\min(I_{N-1}, I_N)} \sigma_{rq} (\mathbf{v}_{rq} \otimes \mathbf{u}_{rq})$, $r = 1, 2, \ldots, R$. Because $\mathbf{B}^{(n)} = [\mathbf{b}_1^{(n)}, \ldots, \mathbf{b}_R^{(n)}]$ $(n = 1, \ldots, N-1)$ forms the best rank-R CPD of $\mathcal{Y}_{[l]}$, each vector $\sigma_{rq}(\mathbf{v}_{rq} \otimes \mathbf{u}_{rq})$ $(r = 1, \ldots, R)$ contributes to achieving the optimal approximation error $\|\mathcal{E}\|_F$ in (19). Discarding any set of singular components $(\mathbf{v}_{rq} \otimes \mathbf{u}_{rq})$ will increase the approximation error. The more singular vectors are eliminated, the higher the approximation error of Y~l. It means that the tensor $\widetilde{\mathcal{Y}}_R$ has a higher approximation error than the tensor $\widetilde{\mathcal{Y}}_J$. That is,

$$\|\mathcal{E}\|_F^2 \leq \|\mathcal{Y} - \widetilde{\mathcal{Y}}_J\|_F^2 \leq \|\mathcal{Y} - \widetilde{\mathcal{Y}}_R\|_F^2, \qquad (21)$$

i.e., the performance of FCP using low-rank approximations is better than that using Algorithm 1.


B. Unfolding M modes

Consider an example where M modes are unfolded, l = [1, ..., N−M, (N−M+1, ..., N)]. In this case, the truncated SVD can no longer be applied to approximate the tensors $\mathcal{F}_r$ of size $I_{N-M+1} \times \cdots \times I_N$ with $\mathrm{vec}(\mathcal{F}_r) = \mathbf{b}_r^{(N-M+1)}$. However, we can apply a low-rank Tucker decomposition to $\mathcal{F}_r$:

$$\mathcal{F}_r \approx \llbracket \mathcal{S}_r;\, \mathbf{U}_r^{(1)}, \mathbf{U}_r^{(2)}, \ldots, \mathbf{U}_r^{(M)} \rrbracket, \qquad (r = 1, \ldots, R), \qquad (22)$$

where $\mathbf{U}_r^{(m)}$ are orthonormal matrices of size $(I_{N-M+m} \times T_{rm})$, and the core tensor $\mathcal{S}_r = [s_{\mathbf{t}}^{(r)}]$ is of size $T_{r1} \times T_{r2} \times \cdots \times T_{rM}$, $\mathbf{t} = [t_1, t_2, \ldots, t_M]$.

In order to estimate an order-N rank-R Kruskal tensor $\widehat{\mathcal{Y}}$, the Tucker tensors in (22) are converted to an equivalent Kruskal tensor of rank $(T_1 T_2 \cdots T_M)$. However, we select only the $J_r$ most dominant $s_{\mathbf{t}}^{(r)}$ $(1 \leq J_r \ll T_1 T_2 \cdots T_M)$, $\mathbf{t} \in \mathcal{T} = \{\mathbf{t}_1, \mathbf{t}_2, \ldots, \mathbf{t}_{J_r}\}$, among all coefficients of $\mathcal{S}_r$ so that

$$\rho_r = \sum_{\mathbf{t} \in \mathcal{T}} \left( s_{\mathbf{t}}^{(r)} \right)^2 \geq \tau^2 \|\mathcal{F}_r\|_F^2. \qquad (23)$$

The tensors Fr have rank-Jr approximations in the Kruskal form

$$\mathcal{F}_r \approx \llbracket \boldsymbol{\lambda}_r;\, \mathbf{U}_r^{(1)} \mathbf{M}_{r1}, \mathbf{U}_r^{(2)} \mathbf{M}_{r2}, \ldots, \mathbf{U}_r^{(M)} \mathbf{M}_{rM} \rrbracket, \qquad (r = 1, 2, \ldots, R),$$

where $\boldsymbol{\lambda}_r = [s_{\mathbf{t}_1}^{(r)}, s_{\mathbf{t}_2}^{(r)}, \ldots, s_{\mathbf{t}_{J_r}}^{(r)}]$ and $\mathbf{M}_{rm}$ $(m = 1, 2, \ldots, M)$ are indicator matrices of size $T_m \times J_r$ which have only $J_r$ non-zero entries, at $\mathbf{M}_{rm}(t_{j,m}, j) = 1$, $\mathbf{t}_j = [t_{j,1}, t_{j,2}, \ldots, t_{j,M}] \in \mathcal{T}$, $j = 1, \ldots, J_r$.

Combination of the R rank-$J_r$ CP approximations for the components $\mathbf{b}_r^{(N-1)}$ yields a rank-J Kruskal tensor $\widetilde{\mathcal{Y}}_J$ $(J = \sum_r J_r)$ as mentioned in the previous section (see Lemma 4.1). A rank-R CPD of $\widetilde{\mathcal{Y}}_J$ will give us an approximation to the true solution. An alternative approach is to treat the M-mode unfolding as (M−1) two-mode unfoldings. For example, since (1, 2, 3) ≡ (1, (2, 3)), the factor matrices can be sequentially estimated using the method in Section IV-A. Indeed, this sequential method is recommended because it is based on the SVD, and low-rank approximation to a matrix is especially well-defined.

C. The proposed algorithm

When the tensor Y is unfolded by a complex unfolding rule l which comprises multiple two-mode or M-mode unfoldings, such as l = [(1, 2), (3, 4, 5), (7, 8)], construction of a rank-J structured Kruskal tensor becomes complicated. In such a case, the factor reconstruction process in Section IV-A or Section IV-B is sequentially applied to each mode to be unfolded. In Algorithm 2, we present a simple version of FCP using low-rank approximation of the merging components. The algorithm reduces the tensor order from N


Algorithm 2: FCP

Input: Data tensor Y: (I1 × I2 × ··· × IN), rank R, threshold τ (≥ 0.98),
       unfolding rule l = [l1, l2, ..., lM], lm = [lm(1), lm(2), ..., lm(Km)]
Output: λ ∈ R^R, N matrices A(n) ∈ R^{In×R}
begin
  % Stage 1: Tensor unfolding and compression
  1  ⟦G; U(1), ..., U(M)⟧ = TD(Y~l, min(I, R))     % Tucker decomposition of order-M Y~l
  % Stage 2: CPD of the unfolded tensor
  2  ⟦λ; B(1), ..., B(M)⟧ = CPD(G, R)              % order-M CPD of the core tensor
  3  for m = 1, 2, ..., M do A(m) ← U(m) B(m)       % back projection of TD
  % Stage 3: Sequential estimation of factor matrices
  4  M̃ = M
     for m = M, M−1, ..., 1 do
        if Km ≥ 2 then
           for k = 1, 2, ..., Km do
              % Stage 3a: Construction of the rank-J Kruskal tensor
  5           m̃ = m + k − 1,  λ̃ = [],  Ã(m̃) = [],  Ă(m̃) = [],  M = []
              for r = 1, 2, ..., R do
  6              Fr = reshape(ar(m̃), [Ilm(k), ∏_{i=k+1}^{Km} Ilm(i)])
  7              Fr ≈ Ur diag(σr) Vr^T              % truncated SVD such that ||σr||²₂ ≥ τ
  8              λ̃ ← [λ̃, λr σr],  Ã(m̃) ← [Ã(m̃), Ur],  Ã(m̃+1) ← [Ã(m̃+1), Vr]
  9              λr ← λr σr1,  Ă(m̃) ← [Ă(m̃), ur1],  Ă(m̃+1) ← [Ă(m̃+1), vr1]
 10              M ← blkdiag(M, 1_{1×Jr})
              % Stage 3b: Rank-J to rank-R Kruskal tensor
 11           ỸR = ⟦λ; A(1), ..., A(m̃−1), Ă(m̃), Ă(m̃+1), A(m̃+1), ..., A(M̃)⟧
              if Σ_{r=1}^{R} Jr > R then
 12              ỸJ = ⟦λ̃; A(1)M, ..., A(m̃−1)M, Ã(m̃), Ã(m̃+1), A(m̃+1)M, ..., A(M̃)M⟧
 13              ⟦λ; A(1), A(2), ..., A(M̃+1)⟧ = structuredCPD(ỸJ, R, ỸR)
 14           M̃ = M̃ + 1
  % Stage 4: Refinement if needed
 15  ⟦λ; A(1), ..., A(N)⟧ = CPD(Y, R, ⟦λ; A(1), A(2), ..., A(N)⟧)

TD(Y, R): rank-R Tucker decomposition of an order-N tensor Y, where R = [R1, R2, ..., RN].
Ŷ = CPD(Y, R, Yinit): approximates an order-N tensor or a tensor in the Kruskal form Y by a rank-R Kruskal tensor Ŷ using initial values Yinit.

to M (e.g., 3) specified by the unfolding rule l = [l1, l2, ..., lM], where each group of modes lm = [lm(1), lm(2), ..., lm(Km)], Km ≥ 1 and $\sum_{m=1}^{M} K_m = N$.

Tucker compression can be applied to the unfolded tensor Y~l [46]. The factor matrices are sequentially recovered over the unfolded modes m, i.e., the modes with Km ≥ 2. The sequential reconstruction of factors is executed over (Km − 1) runs. Each run constructs an order-M̃ rank-J Kruskal tensor, and approximates it by an order-M̃ rank-R Kruskal tensor using ỸR for initialization. It also indicates that Ŷ has

a better approximation error than that of ỸR. The tensor order M̃ gradually increases from M to N. The full implementation of FCP, provided at http://www.bsp.brain.riken.jp/~phan/tensorbox.php, includes other multimode unfoldings. Although a rank-R CPD of the unfolded tensor has a lower approximation error than the best rank-R CPD of the original tensor, for difficult data with collinear components or under-rank approximation (R much lower than the true rank), CPDs of the unfolded tensors and structured Kruskal tensors are often carried out with a slightly higher rank R. In some cases, a refinement stage may be needed to obtain the best solution. That is, the approximate solution after low-rank approximations is used to initialize CPD of the raw data. This stage often requires a lower number of iterations than CPD with random or SVD-based initializations.

V. Simulations

Throughout the simulations, the ALS algorithm factorized data tensors in 1000 iterations and stopped when $\varepsilon \leq 10^{-8}$. The FCP algorithm itself is understood as Algorithm 2 with low-rank approximation. Otherwise, the FCP algorithm with rank-one approximation is denoted by R1FCP. ALS was also utilized in FCP to decompose the unfolded tensors.

Example 4 (Decomposition of order-6 tensors.) We analyzed the mean SAEs (MSAE) of algorithms in decomposition of order-6 tensors with size In = R = 20 by varying the number of lowly collinear factors from 1 to 6, with [c1 ≤ c2 ≤ ··· ≤ c6]. Tensors were corrupted with additive Gaussian noise of 0 dB SNR. ALS was not efficient and often got stuck in local minima. MSAEs over all the estimated components by ALS were clearly lower than the CRIB, especially when there were 5 collinear factors (the first test case in Fig. 4(a)). The FCP method was executed with "good unfolding" rules suggested by the strategy in Section III-B and "bad" ones which violated the unfolding strategy, listed in Table II. Performance of R1FCP (Algorithm 1) was strongly affected by the unfolding rules. Its SAE loss was up to 21 dB with "bad unfoldings". For all the test cases, FCP with low-rank approximations (i.e., Algorithm 2) obtained high performance even with "bad unfolding" rules. In addition, FCP was much faster than ALS. FCP factorized order-6 tensors in 10-20 seconds, while ALS completed the similar tasks in 500-800 seconds. Finally, in this simulation, FCP was 47 times faster on average than ALS.

Example 5 (Factorization of Event-Related EEG Time-Frequency Representation.) This example illustrates an application of CPD for analysis of real-world EEG data [9], [54] which consisted of 28 inter-trial phase coherence (ITPC) measurements [55] of EEG signals of 14 subjects during a proprioceptive pull of the left and right hands.


TABLE II

Comparison of MSAEs (in dB) of ALS and FCP with CRIB in decomposition of order-6 rank-20 tensors of size In = 20, for all n. Correlation coefficients of the factors cn are 0.1 or 0.9 and [c1 ≤ c2 ≤ ··· ≤ c6]. The performance was measured by varying the number of coefficients cn = 0.1. R1FCP (Jr = 1, for all r) was sensitive to unfolding rules.

No. 0.1 | CRIB | ALS | "Good" unfolding | Alg. 1 | Alg. 2 | "Bad" unfolding | Alg. 1 | Alg. 2
1 | 42.33 | 10.8 | [1, (2, 3, 4), (5, 6)] | 40.49 | 41.95 | [2, 3, (1, 4, 5, 6)] | 31.33 | 40.78
2 | 49.40 | 44.3 | [1, 2, (3, 4, 5, 6)] | 48.64 | 48.65 | [3, 4, (1, 2, 5, 6)] | 41.06 | 47.50
3 | 52.16 | 48.0 | [1, 2, (3, 4, 5, 6)] | 52.06 | 52.07 | [4, 5, (1, 2, 3, 6)] | 41.94 | 50.89
4 | 52.26 | 46.8 | [1, 2, (3, 4, 5, 6)] | 52.23 | 52.23 | [1, (2, 3), (4, 5, 6)] | 51.02 | 51.31
5 | 52.26 | 36.2 | [1, 2, (3, 4, 5, 6)] | 48.78 | 51.24 | [(1, 2), (3, 4), (5, 6)] | 30.54 | 51.44
6 | 52.26 | 30.5 | [1, 2, (3, 4, 5, 6)] | 51.13 | 52.19 | [(1, 2), (3, 4), (5, 6)] | 31.04 | 47.75

[Fig. 4 about here]
Fig. 4. Illustration of MSAE loss and execution time averaged over 30 MC runs in decomposition of order-6 rank-20 tensors of size I = R = 20 corrupted with additive Gaussian noise of 0 dB SNR, and cn ∈ {0.1, 0.95}. Panels: (a) SAE loss; (b) execution time (seconds). Curves: ALS; rank-1, good folding; rank-1, bad folding; low rank, good folding; low rank, bad folding; CRIB.

The whole ITPC data set was organized as a 4-way tensor of 28 measurements × 61 frequency bins × 64 channels × 72 time frames. The first 14 measurements were associated with a group of left hand stimuli, while the other ones were associated with right hand stimuli. The order-4 ITPC tensor can be fitted by a multilinear CP model. Mørup et al. analyzed the dataset by nonnegative CP of three components and Tucker components, and compared them with components extracted by NMF and ICA [54]. In this example, our aim was to compare the factorization time of ALS and FCP over various R in


TABLE III: Comparison of fit (%) values in factorization of the ITPC tensor by ALS and FCP in Example 5. Rank-1 FCP (i.e., Algorithm 1) completely failed in this example. Struck-through values in the original table mean that the algorithm did not converge to the desired solution.

R | ALS | Rank-1 FCP [(1,2), 3, 4] | Rank-1 FCP [1, 2, (3, 4)] | Low-rank FCP | Tucker → ALS | Tucker → Alg. 2
5 | 37.7 ± 0.17 | 32.7 | 36.0 | 37.7 ± 0.17 | 36.2 ± 0.03 | 36.9 ± 0.00
8 | 43.8 ± 0.13 | 26.7 | 42.1 | 43.8 ± 0.00 | 42.7 ± 0.01 | 42.9 ± 0.58
11 | 48.0 ± 0.04 | 19.5 | 32.5 | 47.9 ± 0.16 | 47.1 ± 0.09 | 47.5 ± 0.08
20 | 56.0 ± 0.12 | -61.1 | -10.8 | 55.9 ± 0.16 | 55.5 ± 0.18 | 55.8 ± 0.13
30 | 61.1 ± 0.09 | -519 | -301.0 | 61.1 ± 0.09 | 60.8 ± 0.07 | 60.9 ± 0.13
40 | 64.5 ± 0.08 | -649 | -319.1 | 64.4 ± 0.14 | 64.2 ± 0.08 | 64.3 ± 0.18
60 | 68.7 ± 0.02 | -1295 | -432.8 | 68.1 ± 0.05 | 68.6 ± 0.09 | 68.6 ± 0.10
72 | 70.4 ± 0.02 | -7384 | -535.0 | 69.9 ± 0.08 | — | —

TABLE IV Performance of rank-1 FCP with different unfolding rules in decomposition of the ITPC tensor in Example 5. Strikethrough values mean that the algorithm did not converge to the desired solution.

R | Unfolding rule | Rank-1 FCP Fit (%) | Rank-1 FCP Time (s) | Collinearity degrees [c1, c2, c3, c4] | ALS Fit (%) | ALS Time (s)
8 | [(1, 2), 3, 4] | 26.7 | 2.15 | [0.48, 0.70, 0.54, 0.89] | 43.8 | 242
8 | [1, (2, 3), 4] | 29.9 | 5.72 | [0.37, 0.50, 0.40, 0.83] | |
8 | [1, 2, (3, 4)] | 42.1 | 5.59 | [0.34, 0.50, 0.49, 0.72] | |
8 | [1, 3, (2, 4)] | 41.3 | 4.67 | [0.49, 0.52, 0.50, 0.75] | |
20 | [(1, 2), 3, 4] | -61.1 | 5.79 | [0.43, 0.62, 0.50, 0.89] | 56.0 | 752
20 | [1, (2, 3), 4] | 28.3 | 12.12 | [0.34, 0.45, 0.56, 0.90] | |
20 | [1, 2, (3, 4)] | 15.4 | 10.39 | [0.33, 0.48, 0.46, 0.86] | |
20 | [1, 3, (2, 4)] | -22.6 | 9.17 | [0.39, 0.57, 0.69, 0.92] | |

the range of [5, 72], with and without a Tucker compression prior to the CP decompositions. The FCP method employed ALS to factorize the order-3 unfolded tensor, and the fast ALS for the structured Kruskal tensors (see Appendix A). Interpretation of the results can be found in [9], [54]. The low-rank FCP algorithm was applied with the unfolding rule l = [1, 2, (3, 4)]. Execution time for each algorithm was averaged over 10 Monte Carlo runs with different initial values and illustrated in Fig. 5(a) for various R. For relatively low rank R, a prior Tucker compression sped up ALS, and made it more efficient than FCP when R ≤ 11.


[Fig. 5 about here]
Fig. 5. Illustration of execution times (seconds) of ALS and FCP for factorization of the order-4 ITPC tensor in Example 5. Panels: (a) execution time vs R for ALS, FCP, Tucker→ALS and Tucker→FCP; (b) approximation error vs execution time for R = 20, with the FCP stages (compression, CPD, low-rank approximation, refinement) marked.

[Fig. 6 about here]
Fig. 6. Illustration of leading singular values of matrices Fr (r = 1, 2, ..., R) reshaped from components estimated from the ITPC tensor with different unfolding rules [(1, 2), 3, 4] and [1, 2, (3, 4)]. Panels: (a) R = 8; (b) R = 20. Rank-1 FCP (i.e., Algorithm 1) failed in this experiment because this algorithm works only if all Fr are rank-one matrices.

The reason is explained by the compression time of the unfolded tensor in FCP. However, this acceleration technique was less efficient as R → In and inapplicable to ALS for R ≥ In. FCP significantly reduced the execution time of ALS by a factor of 5-60 times, and was slightly improved by the prior compression. Comparison of the fits explained by the algorithms in Table III indicates that while FCP with Algorithm 2 quickly factorized the data, it still maintained a fit equivalent to that of ALS.


For this data, the rank-one FCP algorithm (i.e., Algorithm 1), unfortunately, did not work well. Fits of this algorithm are given in Table III. Performance of this algorithm with several unfolding rules, including [(1, 2), 3, 4], [1, (2, 3), 4], [1, 2, (3, 4)] and [1, 3, (2, 4)], is compared in Table IV. When R = 8 and using the rule l = [(1, 2), 3, 4], the rank-1 FCP algorithm showed the worst performance with a fit of 26.7%, which was not competitive with the fit of 43.8% obtained by ALS. The average collinearity degrees of the estimated components, cn = [0.48, 0.70, 0.54, 0.89], indicate that we should not fold modes 1 and 2; in addition, folding modes 2 and 4, which had the largest collinearity degrees, is suggested, i.e., the unfolding rule l = [1, 3, (2, 4)]. It is clear that the unfolding rule l = [1, 3, (2, 4)] significantly improved the performance of the rank-1 FCP algorithm, with a fit of 41.3%. Moreover, the unfolding rule l = [1, 3, (2, 4)] was also suggested according to the average collinearity degrees cn = [0.37, 0.50, 0.40, 0.83] obtained when applying the unfolding rule l = [1, (2, 3), 4]. This confirms the applicability of the suggested unfolding strategy. For this test case, the unfolding rule l = [1, 2, (3, 4)] achieved the best fit of 42.1%, although this rule was not suggested by the strategy. This can be understood from the fact that the average collinearity degrees of modes 2 and 3 were very similar (0.50 and 0.49, or 0.52 and 0.50, see Table IV). For higher ranks, e.g., R ≥ 11, FCP with rank-one approximation completely failed. The unfolding strategy did not help anymore (see fit values in Table III). In Fig. 6, we display leading singular values of the reshaped matrices Fr (r = 1, 2, ..., R) from the estimated components for R = 8 and 20. The results indicate that the Fr were not rank-one matrices, especially the matrices obtained when using the rule l =

[(1, 2), 3, 4]. Note that the rank-one FCP algorithm works if and only if all Fr are rank-one. This also confirms that the low-rank FCP algorithm was appropriate for this data.

Fig. 5(b) illustrates the relative approximation errors $\varepsilon = \frac{\|\mathcal{Y} - \widehat{\mathcal{Y}}\|_F}{\|\mathcal{Y}\|_F}$ of ALS and FCP for R = 20 as functions of the execution time. ALS took 536.5 seconds to converge. FCP took 1.2 seconds for compression, 0.9 seconds for CPD of the order-3 unfolded tensor, 2.73 seconds for low-rank approximations,

and 2.1 seconds for the refinement stage. ALS and FCP converged to the relative approximation errors εALS = 0.4417 and εFCP = 0.4399, respectively.

Example 6 (Decomposition of the Gabor tensor of the ORL face database.) This example illustrates classification of the ORL face database [56], consisting of 400 faces of 40 subjects. We constructed Gabor feature tensors for 8 different orientations at 4 scales, which were then down-sampled to a 16 × 16 × 8 × 4 × 400 dimensional tensor Y. The unfolding l = [1, 2, (3, 4, 5)] was applied to unfold Y to an order-3 tensor. ALS [23] factorized both Y and Y~l into R = 30, 40, 60 rank-1 tensors in 1000 iterations,


TABLE V Comparison between ALS and FCP (Alg. 2) in factorization of order-5 Gabor tensor constructed from the ORL face dataset.

R | Algorithm | Fit (%) | Time (seconds) | Ratio ALS/FCP | ACC (%) | NMI (%)
30 | FCP | 60.59 | 24 | 39 | 85.00 | 92.91
30 | ALS | 60.56 | 927 | | 85.25 | 93.22
40 | FCP | 62.46 | 39 | 41 | 84.25 | 92.57
40 | ALS | 62.63 | 1599 | | 85.50 | 93.68
60 | FCP | 65.47 | 105 | 162 | 83.00 | 91.62
60 | ALS | 65.64 | 16962 | | 81.38 | 91.44

and stopped when $\|\varepsilon - \varepsilon_{\mathrm{old}}\| \leq 10^{-6} \varepsilon$, where $\varepsilon = \|\mathcal{Y} - \widehat{\mathcal{Y}}\|_F^2$. The rank-one FCP algorithm did not work for this data. For example, when R = 10 and applying the rule l = [1, 2, (3, 4, 5)], R1FCP explained the data with a fit of -31.2%, and yielded average collinearity degrees of [0.60, 0.66, 0.64, 0.95, 0.64]. Although a further decomposition with the unfolding rule l = [1, (2, 3), (4, 5)] achieved a better fit of 44.8%, this result was much lower than the fit of 54.5% obtained by ALS and FCP. The factor A(5) ∈ R^{400×R} comprised compressed features which were used to cluster faces using the K-means algorithm. Table V compares the performance of the two algorithms, including execution time, fit, accuracy (ACC %) and normalized mutual information (NMI). For R = 40, ALS factorized Y in 1599 seconds, while FCP completed the task in only 39 seconds with a slight reduction of fit (≈ 0.17%). For R = 60, ALS was extremely time consuming, requiring 16962 seconds, while FCP only took 105 seconds. Regarding the clustering accuracy, features extracted by FCP still achieved performance comparable to that obtained by ALS.

VI. Conclusions

A fast algorithm has been proposed for high order and relatively large scale CPD. The method decomposes an unfolded tensor of lower order, which is often of size R × R × R, instead of the original data tensor. Higher order structured Kruskal tensors are then generated, and approximated by rank-R tensors in the Kruskal form using the fast algorithms for structured CPD. Efficiency of the strategy proposed for tensor unfoldings has been proven on real-world data. In addition, one important conclusion drawn from our study is that the factor matrices in orthogonally constrained CPD for a high order tensor can be estimated through decomposition of an order-3 unfolded tensor without any loss of accuracy. Finally, the

proposed FCP algorithm has been shown to be 40-160 times faster than ALS for decomposition of order-5 and order-6 tensors.

Appendix A Algorithms for structured CPD

A. Low complexity CP gradients

The CP gradients of $\widetilde{\mathcal{Y}}$ in Lemma 4.1 with respect to $\mathbf{A}^{(n)}$ are quickly computed without construction of $\widetilde{\mathcal{Y}}$, as follows:

$$\left( \widetilde{\mathbf{Y}}_{(n)} - \widehat{\mathbf{Y}}_{(n)} \right) \left( \bigodot_{k \neq n} \mathbf{A}^{(k)} \right) = \widetilde{\mathbf{A}}^{(n)} \operatorname{diag}(\tilde{\boldsymbol{\lambda}})\, \mathbf{W}_n - \mathbf{A}^{(n)} \boldsymbol{\Gamma}_n \qquad (24)$$

where $\mathbf{W}_n = \circledast_{k \neq n} \left( \widetilde{\mathbf{A}}^{(k)T} \mathbf{A}^{(k)} \right)$ and $\boldsymbol{\Gamma}_n = \circledast_{k \neq n} \left( \mathbf{A}^{(k)T} \mathbf{A}^{(k)} \right)$. By replacing $\widetilde{\mathbf{A}}^{(n)} = \mathbf{B}^{(n)} \mathbf{M}$, and employing the identities $(\mathbf{M}^T \mathbf{A}) \circledast (\mathbf{M}^T \mathbf{B}) = \mathbf{M}^T (\mathbf{A} \circledast \mathbf{B})$ and $\mathbf{M} \left( (\mathbf{M}^T \mathbf{A}) \circledast \mathbf{B} \right) = \mathbf{A} \circledast (\mathbf{M} \mathbf{B})$, the first term in (24) is further expressed for $n = 1, \ldots, N-2$ by

$$\widetilde{\mathbf{A}}^{(n)} \operatorname{diag}(\tilde{\boldsymbol{\lambda}})\, \mathbf{W}_n = \mathbf{B}^{(n)} \left( \left( \mathop{\circledast}_{\substack{k=1 \\ k \neq n}}^{N-2} \left( \mathbf{B}^{(k)T} \mathbf{A}^{(k)} \right) \right) \circledast \mathbf{K} \right), \qquad \mathbf{K} = \begin{bmatrix} \lambda_1 \mathbf{1}_{J_1}^T \left( (\mathbf{U}_1^T \mathbf{A}^{(N-1)}) \circledast (\widetilde{\mathbf{V}}_1^T \mathbf{A}^{(N)}) \right) \\ \vdots \\ \lambda_R \mathbf{1}_{J_R}^T \left( (\mathbf{U}_R^T \mathbf{A}^{(N-1)}) \circledast (\widetilde{\mathbf{V}}_R^T \mathbf{A}^{(N)}) \right) \end{bmatrix},$$

and

$$\widetilde{\mathbf{A}}^{(N-1)} \operatorname{diag}(\tilde{\boldsymbol{\lambda}})\, \mathbf{W}_{N-1} = \widetilde{\mathbf{A}}^{(N-1)} \begin{bmatrix} \lambda_1 \widetilde{\mathbf{V}}_1^T \mathbf{A}^{(N)} \operatorname{diag}(\boldsymbol{\omega}_1) \\ \vdots \\ \lambda_R \widetilde{\mathbf{V}}_R^T \mathbf{A}^{(N)} \operatorname{diag}(\boldsymbol{\omega}_R) \end{bmatrix}, \qquad \widetilde{\mathbf{A}}^{(N)} \operatorname{diag}(\tilde{\boldsymbol{\lambda}})\, \mathbf{W}_N = \widetilde{\mathbf{A}}^{(N)} \begin{bmatrix} \lambda_1 \mathbf{U}_1^T \mathbf{A}^{(N-1)} \operatorname{diag}(\boldsymbol{\omega}_1) \\ \vdots \\ \lambda_R \mathbf{U}_R^T \mathbf{A}^{(N-1)} \operatorname{diag}(\boldsymbol{\omega}_R) \end{bmatrix},$$

where $\boldsymbol{\Omega} = [\boldsymbol{\omega}_r] = \circledast_{k=1}^{N-2} \left( \mathbf{A}^{(k)T} \mathbf{B}^{(k)} \right)$. For each $n = 1, 2, \ldots, N$, such computation has a low computational complexity of order $O\!\left( R^2 \left( N + \sum_{n=1}^{N-2} I_n \right) + JR(I_{N-1} + I_N) \right)$. For tensors which have $I_n = I$ for all $n$, in the worst case $J_r \approx I$, hence $J \approx RI$, and we have

$$O\!\left( R^2 \left( N + \sum_{n=1}^{N-2} I_n \right) + JR(I_{N-1} + I_N) \right) = O\!\left( N R^2 I^2 \right) \ll O\!\left( R I^N \right),$$

for $N \geq 4$.

November 19, 2012 DRAFT 24

B. Fast algorithms for structured CPD

By employing the fast CP gradients of the previous section, most CP algorithms can be rewritten to estimate $\mathbf{A}^{(n)}$ from the structured tensor $\widetilde{\mathcal{Y}}$ in Lemma 4.1. For example, the ALS algorithm is given by

$$\mathbf{A}^{(n)} \leftarrow \mathbf{B}^{(n)} \left( \left( \mathop{\circledast}_{\substack{k=1 \\ k \neq n}}^{N-2} \left( \mathbf{B}^{(k)T} \mathbf{A}^{(k)} \right) \right) \circledast \mathbf{K} \right) \boldsymbol{\Gamma}_n^{-1}, \qquad n = 1, \ldots, N-2.$$

Appendix B
Cramér-Rao induced bound for angular error

Denote by $\alpha_1$ the mutual angle between the true factor $\mathbf{a}_1^{(1)}$ and its estimate $\hat{\mathbf{a}}_1^{(1)}$:

$$\alpha_1 = \operatorname{acos} \frac{\mathbf{a}_1^{(1)T} \hat{\mathbf{a}}_1^{(1)}}{\|\mathbf{a}_1^{(1)}\| \, \|\hat{\mathbf{a}}_1^{(1)}\|}. \qquad (25)$$

Theorem B.1 ([37]): The Cramér-Rao induced bound (CRIB) on $\alpha_1^2$ for a rank-2 tensor is given by

$$\mathrm{CRIB}(a_1) = \frac{\sigma^2}{\lambda_1^2 (1 - h_1^2)} \left( I_1 - 1 + \frac{(1 - c_1^2) h_1^2 y^2 + z - h_1^2 z(z+1)}{(1 - c_1 y - h_1^2 (z+1))^2 + h_1^2 (y + c_1 z)^2} \right) \qquad (26)$$

where $c_n = \mathbf{a}_1^{(n)T} \mathbf{a}_2^{(n)}$, and

$$h_n = \prod_{\substack{2 \leq k \leq N \\ k \neq n}} c_k \quad \text{for } n = 1, \ldots, N, \qquad (27)$$

$$y = -c_1 \sum_{n=2}^{N} \frac{h_n^2 (1 - c_n^2)}{c_n^2 - h_n^2 c_1^2}, \qquad (28)$$

$$z = \sum_{n=2}^{N} \frac{1 - c_n^2}{c_n^2 - h_n^2 c_1^2}. \qquad (29)$$

References

[1] R.A. Harshman, “Foundations of the PARAFAC procedure: Models and conditions for an explanatory multimodal factor analysis,” UCLA Working Papers in Phonetics, vol. 16, pp. 1–84, 1970. [2] J.D. Carroll and J.J. Chang, “Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart–Young decomposition,” Psychometrika, vol. 35, no. 3, pp. 283–319, 1970. [3] C.M. Andersson and R. Bro, “Practical aspects of PARAFAC modelling of fluorescence excitation-emission data,” Journal of Chemometrics, vol. 17, pp. 200–215, 2003. [4] R. Bro, Multi-way Analysis in the Food Industry - Models, Algorithms, and Applications, Ph.D. thesis, University of Amsterdam, Holland, 1998. [5] G. Tomasi, Practical and Computational Aspects in Chemometric Data Analysis, Ph.D. thesis, Frederiksberg, Denmark, 2006. [6] L. De Lathauwer and J. Castaing, “Tensor-based techniques for the blind separation of DS-CDMA signals,” Signal Processing, vol. 87, no. 2, pp. 322–336, feb 2007.


[7] N.D. Sidiropoulos and R. Bro, "PARAFAC techniques for signal separation," in Signal Processing Advances in Communications, P. Stoica, G. Giannakis, Y. Hua, and L. Tong, Eds., vol. 2, chapter 4. Prentice-Hall, Upper Saddle River, NJ, USA, 2000. [8] A.H. Andersen and W.S. Rayens, "Structure-seeking multilinear methods for the analysis of fMRI data," NeuroImage, vol. 22, pp. 728–739, 2004. [9] M. Mørup, L. K. Hansen, C. S. Herrmann, J. Parnas, and S. M. Arnfred, "Parallel factor analysis as an exploratory tool for wavelet transformed event-related EEG," NeuroImage, vol. 29, no. 3, pp. 938–947, 2006. [10] H. Becker, P. Comon, L. Albera, M. Haardt, and I. Merlet, "Multi-way space-time-wave-vector analysis for EEG source separation," Signal Processing, vol. 92, no. 4, pp. 1021–1031, 2012. [11] B. W. Bader, M. W. Berry, and M. Browne, "Discussion tracking in Enron email using PARAFAC," in Survey of Text Mining II, M. W. Berry and M. Castellanos, Eds., pp. 147–163. Springer London, 2008. [12] D. M. Dunlavy, T. G. Kolda, and E. Acar, "Temporal link prediction using matrix and tensor factorizations," ACM Transactions on Knowledge Discovery from Data, vol. 5, no. 2, Article 10, 27 pages, February 2011. [13] D. González, A. Ammar, F. Chinesta, and E. Cueto, "Recent advances on the use of separated representations," International Journal for Numerical Methods in Engineering, vol. 81, no. 5, pp. 637–659, 2010. [14] A. Shashua and T. Hazan, "Non-negative tensor factorization with applications to statistics and computer vision," in Proc. of the 22nd International Conference on Machine Learning (ICML), Bonn, Germany, 2005, pp. 792–799. [15] I. Ibraghimov, "Application of the three-way decomposition for matrix compression," Numerical Linear Algebra with Applications, vol. 9, no. 6-7, pp. 551–565, 2002. [16] A. N. Langville and W. J. Stewart, "A Kronecker product approximate preconditioner for SANs," Numerical Linear Algebra with Applications, vol. 11, no. 8-9, pp. 723–752, 2004. [17] D. V. Savostyanov, E. E. Tyrtyshnikov, and N. L. Zamarashkin, "Fast truncation of mode ranks for bilinear tensor operations," Numerical Linear Algebra with Applications, vol. 19, no. 1, pp. 103–111, 2012. [18] P. Constantine, A. Doostan, Q. Wang, and G. Iaccarino, "A surrogate accelerated Bayesian inverse analysis of the HyShot II supersonic combustion data," in Proceedings of the Summer Program 2010, AIAA-2011-2037, Center for Turbulent Research, 2010. [19] W. Hackbusch, B. N. Khoromskij, and E. E. Tyrtyshnikov, "Hierarchical Kronecker tensor-product approximations," Journal of Numerical Mathematics, vol. 13, no. 2, pp. 119–156, Sept. 2005. [20] W. Hackbusch and B. N. Khoromskij, "Tensor-product approximation to operators and functions in high dimensions," Journal of Complexity, vol. 23, no. 4-6, pp. 697–714, 2007. [21] I. V. Oseledets and E. E. Tyrtyshnikov, "Breaking the curse of dimensionality, or how to use SVD in many dimensions," SIAM Journal on Scientific Computing, vol. 31, no. 5, pp. 3744–3759, Jan. 2009. [22] A.-H. Phan and A. Cichocki, "Tensor decompositions for feature extraction and classification of high dimensional datasets," Nonlinear Theory and Its Applications, IEICE, vol. 1, pp. 37–68 (invited paper), 2010. [23] C.A. Andersson and R. Bro, "The N-way toolbox for MATLAB," Chemometrics and Intelligent Laboratory Systems, vol. 52, no. 1, pp. 1–4, 2000. [24] M. Rajih, P. Comon, and R. A. Harshman, "Enhanced line search: A novel method to accelerate PARAFAC," SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 3, pp. 1128–1147, 2008. [25] Y. Chen, D. Han, and L. Qi, "New ALS methods with extrapolating search directions and optimal step size for complex-valued tensor decompositions," IEEE Transactions on Signal Processing, vol. 59, no. 12, pp. 5888–5898, 2011.
[26] P. Paatero, C. Navasca, and P. Hopke, "Fast rotationally enhanced alternating-least-squares," Workshop on Tensor Decompositions and Applications (TDA 2010), SIAM, 2010.
[27] H.A.L. Kiers, "A three-step algorithm for CANDECOMP/PARAFAC analysis of large data sets with multicollinearity," Journal of Chemometrics, vol. 12, no. 3, pp. 155–171, 1998.
[28] E. Acar, D. M. Dunlavy, and T. G. Kolda, "A scalable optimization approach for fitting canonical tensor decompositions," Journal of Chemometrics, vol. 25, no. 2, pp. 67–86, February 2011.
[29] P. Paatero, "A weighted non-negative least squares algorithm for three-way PARAFAC factor analysis," Chemometrics and Intelligent Laboratory Systems, vol. 38, no. 2, pp. 223–242, 1997.
[30] P. Tichavský and Z. Koldovský, "Simultaneous search for all modes in multilinear models," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2010, pp. 4114–4117.
[31] A.-H. Phan, P. Tichavský, and A. Cichocki, "Low complexity damped Gauss-Newton algorithms for CANDECOMP/PARAFAC," SIAM Journal on Matrix Analysis and Applications, accepted, 2012, available at http://arxiv.org/abs/1205.2584.
[32] A.-H. Phan, P. Tichavský, and A. Cichocki, "Fast damped Gauss-Newton algorithm for sparse and nonnegative tensor factorization," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011, pp. 1988–1991.
[33] L. De Lathauwer, "A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization," SIAM Journal on Matrix Analysis and Applications, vol. 28, pp. 642–666, 2006.
[34] F. Roemer and M. Haardt, "A closed-form solution for multilinear PARAFAC decompositions," in Proc. 5th IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM 2008), July 2008, pp. 487–491.
[35] L. De Lathauwer and J. Castaing, "Blind identification of underdetermined mixtures by simultaneous matrix diagonalization," IEEE Transactions on Signal Processing, vol. 56, no. 3, pp. 1096–1105, 2008.
[36] A. Franc, Étude algébrique des multitableaux: apports de l'algèbre tensorielle (Algebraic study of multiway arrays: contributions of tensor algebra), Ph.D. thesis, Université Montpellier II, 1992.
[37] P. Tichavský, A. H. Phan, and Z. Koldovský, "Cramér-Rao-induced bounds for CANDECOMP/PARAFAC tensor decomposition," ArXiv e-prints, Sept. 2012, http://arxiv.org/abs/1209.3215.
[38] T.G. Kolda and B.W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, September 2009.
[39] A. Cichocki, R. Zdunek, A.-H. Phan, and S. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, Wiley, Chichester, 2009.
[40] C. F. Van Loan, "Lecture 5: The CP representation and tensor rank," in International Summer School on Numerical Linear Algebra, From Matrix to Tensor: The Transition to Numerical Multilinear Algebra, SIAM, Italy, 2010.
[41] F.L. Hitchcock, "Multiple invariants and generalized rank of a p-way matrix or tensor," Journal of Mathematics and Physics, vol. 7, pp. 39–79, 1927.
[42] A.-H. Phan, P. Tichavský, and A. Cichocki, "On fast computation of gradients for CANDECOMP/PARAFAC algorithms," CoRR, vol. abs/1204.1586, 2012.
[43] L.R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, pp. 279–311, 1966.
[44] L.R. Tucker, "The extension of factor analysis to three-dimensional matrices," in Contributions to Mathematical Psychology, H. Gulliksen and N. Frederiksen, Eds., pp. 110–127, Holt, Rinehart and Winston, New York, 1964.
[45] L. De Lathauwer, B. De Moor, and J. Vandewalle, "On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324–1342, 2000.
[46] C.A. Andersson and R. Bro, "Improving the speed of multi-way algorithms: Part I. Tucker3," Chemometrics and Intelligent Laboratory Systems, vol. 42, pp. 93–103, 1998.
[47] S. Ragnarsson and C. F. Van Loan, "Block tensor unfoldings," SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 1, pp. 149–169, 2012.
[48] P. Tichavský and Z. Koldovský, "Stability of CANDECOMP-PARAFAC tensor decomposition," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011, pp. 4164–4167.
[49] Z. Koldovský, P. Tichavský, and A.-H. Phan, "Stability analysis and fast damped Gauss-Newton algorithm for INDSCAL tensor decomposition," in Statistical Signal Processing Workshop (SSP), IEEE, 2011, pp. 581–584.
[50] B. C. Mitchell and D. S. Burdick, "Slowly converging PARAFAC sequences: Swamps and two-factor degeneracies," Journal of Chemometrics, vol. 8, pp. 155–168, 1994.
[51] P. Comon, X. Luciani, and A. L. F. de Almeida, "Tensor decompositions, alternating least squares and other tales," Journal of Chemometrics, vol. 23, no. 7-8, pp. 393–405, 2009.
[52] M. B. Dosse, J. M. F. ten Berge, and J. N. Tendeiro, "Some new results on orthogonally constrained Candecomp," Journal of Classification, vol. 28, pp. 144–155, 2011.
[53] M. Sorensen, L. De Lathauwer, P. Comon, S. Icart, and L. Deneire, "Canonical polyadic decomposition with orthogonality constraints," SIAM Journal on Matrix Analysis and Applications, accepted, 2012.
[54] M. Mørup, L.K. Hansen, and S.M. Arnfred, "ERPWAVELAB: A toolbox for multi-channel analysis of time-frequency transformed event related potentials," Journal of Neuroscience Methods, vol. 161, no. 2, pp. 361–368, 2006.
[55] C. Tallon-Baudry, O. Bertrand, C. Delpuech, and J. Pernier, "Stimulus specificity of phase-locked and non-phase-locked 40 Hz visual responses in human," Journal of Neuroscience, vol. 16, no. 13, pp. 4240–4249, 1996.
[56] F. Samaria and A.C. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, 1994.