
NeuroImage 167 (2018) 73–83


Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization

Niko Mäkelä a,b,*, Matti Stenroos a, Jukka Sarvas a, Risto J. Ilmoniemi a,b

a Department of Neuroscience and Biomedical Engineering (NBE), Aalto University School of Science, Espoo, Finland
b BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital (HUH), Helsinki, Finland

* Corresponding author: Department of Neuroscience and Biomedical Engineering (NBE), Aalto University School of Science, Espoo, Finland. E-mail address: niko.makela@aalto.fi (N. Mäkelä). https://doi.org/10.1016/j.neuroimage.2017.11.013

Keywords: Magnetoencephalography; Electroencephalography; MEG; EEG; Source localization; Inverse methods; Multiple sources

Abstract

Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications.

Introduction

Multiple signal classification (MUSIC) and its recursively-applied version (RAP-MUSIC) are standard methods for locating active brain regions in magneto- and electroencephalography (MEG/EEG). MUSIC-type source localization is based on dividing the data space into signal and noise subspaces, and testing for each candidate source in the region-of-interest (ROI) whether its topography belongs to the signal subspace or not (Schmidt, 1986; Mosher and Leahy, 1999).

Thanks to their simplicity and intuitiveness, computational efficiency, and insensitivity to measurement noise as well as to temporal correlations between the active sources, MUSIC methods, especially RAP-MUSIC, have been found useful in various offline and online MEG/EEG applications (Mosher and Leahy, 1999; Mosher et al., 1999; Dinh et al., 2012, 2014). The MUSIC framework offers probably one of the simplest inverse methods that incorporates both spatial and temporal information in source localization from EEG/MEG (Mosher and Leahy, 1999). MUSIC is closely related to beamformers, which are extensively used in the analysis of EEG/MEG (Sekihara and Nagarajan, 2008). One difference is that, unlike beamformers, MUSIC does not provide the source time-courses during the localization process; they need to be estimated separately. Unlike the beamformer, the MUSIC procedure does not require the inversion of the data covariance matrix. MUSIC should also tolerate time-correlated sources better, as uncorrelatedness is not assumed in defining the MUSIC localizer. Furthermore, RAP-MUSIC provides an important solution to the difficult peak-detection problem of the basic MUSIC, and allows a straightforward automation of the multiple-source localization task (Mosher and Leahy, 1999). While RAP-MUSIC can be used in analyzing EEG/MEG offline for both event-related (Pascarella et al., 2010; Cheyne et al., 2006) and spontaneous brain signals (Ewald et al., 2014; Baker et al., 2014; Groß et al., 2001), its computational efficiency makes it a particularly intriguing tool for online applications, such as closed-loop EEG systems and brain–computer interfaces (Dinh et al., 2012, 2014; Sanchez et al., 2014; Birbaumer et al., 2009).

However, we tested the performance of RAP-MUSIC on simulated EEG data and observed that it systematically overestimated the number of the sources, i.e., it gave false positives. This raised the question whether there is something wrong or suboptimal in the conceptually beautiful RAP-MUSIC. Therefore, we decided to investigate the RAP-MUSIC algorithm in detail.

We introduce a novel recursive MUSIC-type method called truncated RAP-MUSIC (TRAP-MUSIC), which corrects a hidden limitation in the conventional RAP-MUSIC algorithm. We call this limitation 'the RAP dilemma'. Due to the RAP dilemma, the recursive process may leave unwanted residuals to the data model; these residuals can lead to high localizer values and thus to misleading interpretation of the location and number of the sources.


We explain and analyze the RAP dilemma both theoretically and in numerical simulations, and show how TRAP-MUSIC overcomes this problem. TRAP-MUSIC is a 'corrected' version of the conventional RAP-MUSIC; TRAP-MUSIC applies a sequential dimension reduction to the signal-space estimate. We show that our correction allows robust and reliable estimation of the number of the sources, which is often not possible with RAP-MUSIC.

New MUSIC methods have been recently developed to serve in specific applications, e.g., for locating extended sources (ExSo-MUSIC; Birot et al., 2011), or for locating synchronous activity, either by exploiting source clustering (POP-MUSIC; Liu and Schimpf, 2006) or the imaginary part of the cross-spectrum (Wedge-MUSIC; Ewald et al., 2014, and SC-MUSIC; Shahbazi et al., 2015). We point out that most of these MUSIC variations have added their modifications on top of the conventional RAP-MUSIC; they first apply RAP-MUSIC, and subsequently apply their novel extra step. Therefore, it is possible that at least some issues due to the RAP dilemma may have been inherited by methods that use the actual RAP-MUSIC. Contrary to these methods, our improvement is not an additional step, but a correction applied within the iterative MUSIC algorithm.

We compare TRAP-MUSIC with RAP-MUSIC, providing both illustrative examples and statistical evidence from extensive simulations with varying key parameters, e.g., SNR, number of recursion steps, and temporal correlation between sources. In addition, using reader-friendly basic linear algebra, we give a solid mathematical justification for the concepts on which TRAP-MUSIC is based, and demonstrate its performance on measured MEG data.

We argue that TRAP-MUSIC is an efficient and useful tool for revealing the active brain from MEG/EEG data. Its improvements in performance come without any computational cost, and thus, it is suitable for both online and offline applications (for online use of RAP-MUSIC, see e.g., Dinh et al. (2012, 2014)). Essentially, TRAP-MUSIC should be suitable for any application where RAP-MUSIC could be considered, and it may open new windows for MUSIC in the analysis of MEG/EEG.

Theory and methods

Description of the measurement data

We assume that the (noiseless part of the) measurement data are generated by a finite number of cortical sources that can be modeled by a set of equivalent current dipoles (Hämäläinen et al., 1993).

We denote a current dipole by a pair (p, η), where p is the location and the unit vector η is the orientation of the dipole. The amplitude of a dipole is denoted by s, and so its dipole moment is sη. For m sensors, the sensor-readings vector, or topography, due to a unit-strength source is denoted by l(p, η). To every dipole location p we assign a local lead-field matrix L(p) whose columns are l(p, e_x), l(p, e_y), and l(p, e_z), where e_x, e_y, and e_z are the Cartesian unit vectors. It follows that l(p, η) = L(p)η.

We assume that the measurement data, collected in the data matrix Y, are obtained at N time points and are due to an unknown, finite number n of source dipoles and additive noise. The amplitudes s_ij of the sources i = 1, …, n at time points t_j, j = 1, …, N, are collected in a time-course matrix S, where row s_i of S is the time-course of source i.

The noisy measurement data collected by m > n sensors are described by the linear model

Y = Y_0 + ε = AS + ε,   (1)

where A = [l(p_1, η_1), …, l(p_n, η_n)] is the mixing matrix, Y_0 = AS the noiseless data matrix, and ε the noise matrix. The noise is assumed to be statistically independent of A and S. We call the noise white if its (non-normalized) covariance matrix εε^T is αI with α > 0 and identity matrix I; otherwise it is colored.

We assume that the columns of A, l(p_j, η_j), are linearly independent, i.e., rank(A) = n. Also, we assume that the time-courses S are time-centered, i.e., have zero mean, and that they are non-vanishing and linearly independent, so that rank(S) = n. Note that the time-courses may be correlated, i.e., s_i^T s_j, where T denotes transpose, may be non-zero for i ≠ j. Finally, we assume that a sufficiently accurate forward model is available.

MUSIC

The objective of MUSIC-type methods is to estimate the source parameters from the noisy measurement data Y (Mosher and Leahy, 1998, 1999; Schmidt, 1986) with the aid of spatial/physical information of the forward model and temporal information of the measured time-series signals. In other words, one wants to find the sources, typically dipoles (p_1, η_1), …, (p_n, η_n). In practice, localization is done by a localizer (function), which is evaluated at all source-location candidates in the discretized scanning grid overlaid on the ROI. The maxima of the localizer correspond to the estimated source locations. It is worth noticing that real sources are not point-like, and they are not exactly at the scanning-grid points, neither in practice nor in our simulations (Kaipio and Somersalo, 2007).

MUSIC algorithms are based on the separation of the data space, span(Y), into two mutually orthogonal subspaces, the signal space span(A) and the noise space span(A)^⊥; span(Y) and span(A) refer to vector subspaces spanned by the columns of Y and A, respectively. Let P_sg = AA^†, where † denotes the Moore–Penrose pseudoinverse, be the orthogonal projection from the R^m data space onto span(A), i.e., the projection to the brain-signal space.

MUSIC algorithms are usually divided into two categories: scalar MUSIC, which is used if the orientations of the test dipoles in the ROI are predetermined and fixed for each location in the forward model, and vector MUSIC, used if the dipole orientations are not predetermined or known a priori (Mosher and Leahy, 1999; Sekihara and Nagarajan, 2008; Hämäläinen et al., 1993). In the fixed-orientation case, the orientation η is a function of location p, i.e., η = η(p); consequently, the topographies l(p, η) = L(p)η(p) are functions of only the location p.

For simplicity, consider here scalar MUSIC. Scalar MUSIC is based on the following property of the signal-space projection P_sg: for any p in the ROI, P_sg l(p) = l(p) if l(p) is one of the actual source topographies that contributed to span(A), and ||P_sg l(p)|| < ||l(p)|| otherwise; ||·|| denotes the Euclidean norm ||x|| = (|x_1|² + … + |x_m|²)^(1/2) for any m-vector x.

Consider next the data equation (1) with white noise ε, i.e., εε^T = σ²I. Given the covariance matrix of the data, C = YY^T, the separation into signal and noise spaces can be done by an eigenvalue decomposition of C as follows:

C = C_0 + σ²I = U diag_{m×m}(d_1 + σ², …, d_n + σ², σ², …, σ²) U^T,   (2)

where C_0 = UDU^T is the covariance matrix of the noiseless data, U is an m × m orthonormal matrix, d_1 ≥ … ≥ d_n > d_{n+1} = … = d_m = 0 are the diagonal elements of D, and σ is the noise level. Let U_sg = U(:, 1:n). Here we use the Matlab notation, where M(i, :) and M(:, j) are the ith row and jth column of the matrix M, respectively. Because C_0 = UDU^T, we have span(U_sg) = span(A), which is the signal space, and so the exact projection to the signal space is P_sg = U_sg U_sg^T.

In practice, we do not know the true number n of the sources. In theory, we could estimate n from the eigenvalue decomposition of the data covariance matrix C, by determining the index j after which the eigenvalues d_j drop and stay flat, representing noise. However, it is often hard or even impossible to see a clear, reliable drop or plateau in this eigenvalue spectrum, and hence, an overestimation of n has been recommended to ensure that the true signal space definitely belongs to the estimated signal space (Mosher and Leahy, 1999; Liu and Schimpf, 2006; Moiseev et al., 2011).
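As a concrete illustration of this subspace estimate, the following NumPy sketch forms the data covariance, takes its leading eigenvectors, and builds the projector onto the estimated signal space. The function and variable names are ours, not from the paper, and the snippet assumes white (or pre-whitened) noise as discussed above.

```python
import numpy as np

def signal_subspace_projector(Y, n_tilde):
    """Estimate the signal-subspace basis U_s and projector P_s = U_s U_s^T.

    Y       : m x N data matrix (sensors x time points)
    n_tilde : assumed (over)estimate of the number of sources
    """
    C = Y @ Y.T                                  # data covariance, cf. Eq. (2)
    eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]            # re-order to descending
    U_s = eigvecs[:, order[:n_tilde]]            # leading n_tilde eigenvectors
    P_s = U_s @ U_s.T                            # orthogonal projector onto span(U_s)
    return U_s, P_s
```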


MUSIC algorithms are therefore initialized with ñ > n in practice; this choice also determines the number of recursion steps. Let U_s = U(:, 1:ñ). Then P_s = U_s U_s^T can be used instead of P_sg to identify the sources, similarly to P_sg. This is because span(P_sg) is a subspace of span(P_s), and the ñ − n extra dimensions of span(P_s) represent only noise. Accordingly, the localizer for scalar MUSIC is given by

μ(p) = ||P_s l(p)||² / ||l(p)||²,   (3)

where μ(p) ≈ 1 if p is one of the true sources at p_1, …, p_n, and μ(p) < 1 if there is no source at (or near) p. So, the source locations, corresponding to the n largest local maxima of μ(p), can in principle be found by computing μ(p) for each candidate-source location in the scanning grid.

If an estimate C_ε of the noise covariance is available, we can whiten the data equation (1) by multiplying both of its sides with (a reasonably regularized) C_ε^(−1/2). In MEG and EEG, the noise estimate can be obtained, for example, from the pre-stimulus part of the signal; in MEG, an empty-room measurement could also be used to determine the measurement (sensor) noise. Therefore, unless otherwise stated, we assume that the data are either corrupted by (approximately) white noise or have been whitened. We note, however, that given a sufficient SNR, MUSIC methods also work in practice to some extent with colored noise.

If the forward model does not contain any constraints on the source orientations, vector MUSIC is used; its localizer μ(p) is given by

μ(p) = max_{||η||=1} ||P_s L(p)η||² / ||L(p)η||².   (4)

Because l(p, η) = L(p)η belongs to span(A) only for a true source at p = p_k and η = ±η_k for some k, 1 ≤ k ≤ n, we can reason that μ(p) ≈ 1 if p is one of the true sources p_1, …, p_n, and μ(p) < 1 otherwise. The orientations of the sources can be determined in closed form: the maximum μ(p) and maximizer η^vec of the expression in Eq. (4) are derived by making a change of variable z = F^(−1)η with F = (L(p)^T L(p))^(−1/2), and observing that this turns the task into maximizing the quadratic form z^T K z for ||z|| = 1, with K = F L(p)^T P_s L(p) F. With η^vec(p), we can assign a topography l^vec(p) = L(p)η^vec(p) to every p in the ROI. This transforms vector MUSIC to the scalar one, with l(p) replaced by l^vec(p).

General concepts of recursive MUSIC algorithms

Here, we briefly introduce the basic concepts of recursive MUSIC, using as an example the conventional RAP-MUSIC, in which the sources are found one after another (Mosher and Leahy, 1999). At each recursion step, the topography of one source is projected out of the data and the forward model; the MUSIC algorithm is then applied to the transformed data equation. Essentially, RAP-MUSIC transforms the difficult problem of finding n local maxima of the localizer in one round into a much simpler problem of finding one global maximum at each sequential round.

The RAP-MUSIC process starts with a plain MUSIC scan step, which gives the estimate of the first source location p̂_1 as the global maximum point of the localizer μ(p). The topography l̂_1 at p̂_1 for scalar MUSIC is l(p̂_1), and for vector MUSIC, l̂_1 = L(p̂_1)η^vec(p̂_1).

After source locations p̂_1, …, p̂_{k−1} with topographies l̂_1, …, l̂_{k−1} have been found, we find p̂_k and l̂_k in the kth recursion step as follows. We form an orthogonal projection, the out-projector Q_k, by

Q_k = I − B_k B_k^†,   (5)

where B_k = [l̂_1, …, l̂_{k−1}] contains the topographies of the previously found sources. Then 0 = Q_k l̂_j ≃ Q_k l_j for j < k. The transformed (approximated) signal space is span(Q_k U_s). Next we form the singular value decomposition (SVD) of Q_k U_s as Q_k U_s = U_k D_k V_k^T, where U_k is an m × m orthonormal matrix and the singular values d_1 ≥ … ≥ d_ñ are the diagonal elements of D_k. With this SVD, we can compute the orthogonal projection P_k onto span(Q_k U_s) by

P_k = Σ_{j=1}^{ñ} u_{kj} u_{kj}^T = U_k(:, 1:ñ) U_k(:, 1:ñ)^T,   (6)

where u_{kj} = U_k(:, j) and ñ ≥ n. For the kth recursion step, the scalar RAP-MUSIC localizer μ_k(p) is given by

μ_k(p) = ||P_k Q_k l(p)||² / ||Q_k l(p)||²,   (7)

and the vector RAP-MUSIC localizer by

μ_k(p) = max_{||η||=1} ||P_k Q_k L(p)η||² / ||Q_k L(p)η||²,   (8)

where the closed forms for μ_k(p) and its maximizer η_k^vec(p) are obtained with basic linear algebra, as explained in the previous section.

The location p̂_k is now the global maximum point of the localizer μ_k(p). For scalar RAP-MUSIC, the topography l̂_k is given by l̂_k = l(p̂_k), and for vector RAP-MUSIC, by l̂_k = L(p̂_k)η_k^vec(p̂_k).

Note that at each recursive step, RAP-MUSIC (Mosher and Leahy, 1999) performs the same operation as the conventional MUSIC (Schmidt, 1986), but applied to the transformed data equation

Q_k Y = Q_k A S + Q_k ε,   (9)

which has the same form as the original data equation (1) with the additional out-projector Q_k applied to both sides.

The recursive process is continued until all sources have been found, which should be indicated by a significantly reduced maximum value of μ_k(p) after k = n steps; a 'plateau' of low values should be observed in max_p μ_k(p) as a function of k after n steps. The drop could then be used as a stopping rule or as a classifier separating the n true and ñ − n false sources for recursive MUSIC algorithms. Some fixed thresholds, e.g., 0.95 or 0.8 for max_p μ_k(p), have been suggested (Mosher and Leahy, 1999; Liu and Schimpf, 2006); using a fixed threshold has, however, been shown to be very sensitive, for example, to the SNR and to the configuration of the sources, and hence, adaptive techniques have been suggested (Katyal and Schimpf, 2004; Cheyne et al., 2006). The drop, leading to a plateau, does not, however, happen for RAP-MUSIC due to the RAP dilemma, as we show in the following section.
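Before turning to that issue, here is a minimal NumPy/SciPy sketch of one RAP-MUSIC recursion step with the vector localizer, following Eqs. (4)–(8). The function names, the list-of-lead-fields data structure L_grid, and the small regularization term are our own illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np
from scipy.linalg import eigh

def vector_localizer(P, Q, L_p):
    """max over unit-norm eta of ||P Q L(p) eta||^2 / ||Q L(p) eta||^2 (Eqs. (4), (8)).

    Solved as a 3 x 3 generalized eigenvalue problem; returns the localizer
    value and the maximizing orientation eta.
    """
    A = Q @ L_p                                   # out-projected local lead field (m x 3)
    num = A.T @ P @ A                             # numerator quadratic form
    den = A.T @ A + 1e-12 * np.eye(3)             # denominator form, slightly regularized
    vals, vecs = eigh(num, den)                   # eigenvalues in ascending order
    return vals[-1], vecs[:, -1]

def rap_music_step(U_s, L_grid, B):
    """One RAP-MUSIC recursion step (Eqs. (5)-(8)).

    U_s    : m x n_tilde signal-space basis from the initial scan
    L_grid : list of m x 3 local lead-field matrices L(p), one per scanning-grid point
    B      : m x (k-1) matrix of already-found topographies (an empty m x 0 array at step 1)
    """
    m = U_s.shape[0]
    Q = np.eye(m) - B @ np.linalg.pinv(B) if B.shape[1] else np.eye(m)   # out-projector, Eq. (5)
    U_k, _, _ = np.linalg.svd(Q @ U_s, full_matrices=False)
    P = U_k @ U_k.T                                # projector onto span(Q U_s), Eq. (6)
    mu = np.array([vector_localizer(P, Q, L)[0] for L in L_grid])
    i_hat = int(np.argmax(mu))                     # estimated source = global maximum
    _, eta_hat = vector_localizer(P, Q, L_grid[i_hat])
    eta_hat = eta_hat / np.linalg.norm(eta_hat)    # unit-norm orientation
    l_hat = L_grid[i_hat] @ eta_hat                # topography of the found source
    return i_hat, mu, l_hat
```

Repeating this step, with the found topography appended to B, gives the sequential localization.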


The RAP dilemma

The conventional RAP-MUSIC (Mosher and Leahy, 1999) has the unwanted property that, in the recursive process, it leaves large residual values of the localizer μ_k(p), for example, in the vicinity of already-found sources. These residuals stem from preceding recursion steps due to imperfect out-projection of the true topographies that correspond to the already-found sources; this RAP dilemma can degrade source estimation in the subsequent rounds.

The origin of the RAP dilemma can be explained with the following reasoning, applied to the data equation (9) and the RAP-MUSIC localizer (7) (or (8)). From Eq. (2) we see that span(A) is a subspace of span(U_s). Consider the kth recursion step of RAP-MUSIC, where l_1, …, l_n are the true topographies of the sources, while p̂_1, …, p̂_{k−1} and l̂_1, …, l̂_{k−1} are the corresponding already-found locations and topographies, with l̂_j ≃ l_j for j < k. Then 0 = Q_k l̂_j ≠ Q_k l_j, because l̂_j is not exactly equal to l_j. On the other hand, l_j belongs to the true signal space span(A), which is a subspace of span(U_s); thus, Q_k l_j belongs to the kth signal-space estimate span(Q_k U_s). This implies that P_k Q_k l_j = Q_k l_j with P_k as in Eq. (6), and so

μ_k(p_j) = ||P_k Q_k l_j||² / ||Q_k l_j||² = 1   (10)

for the already-found sources. So, the maximum of μ_k(p) over p in the ROI is equal to 1, and it is attained at all locations p_j for j < k. This may lead to a wrong choice of p̂_k, because μ_k(p) ≃ 1 also in the neighborhoods of the locations p_j, due to the continuity of μ_k(p). Note that the RAP dilemma exists even with the correct ñ = n.

In other words, the RAP dilemma prevents RAP-MUSIC from cleaning out the source information of previous recursion steps. This may lead to spurious localizer maxima in the subsequent rounds. In particular, the RAP dilemma hinders RAP-MUSIC from correctly estimating the number of the sources. Fortunately, this issue can be overcome by using TRAP-MUSIC.

Truncated RAP-MUSIC (TRAP-MUSIC)

The core idea of TRAP-MUSIC is to apply a sequential dimension reduction of the estimated remaining signal space at each recursion step. The rationale for this sequential dimension reduction of the projection to the signal space is conceptually explained as follows. At each recursion step, one source is found and projected out, and hence, for the following step, there is one source less in the remaining signal space. In other words, after the kth step is completed, there are n − k sources left to be found, and hence, it is reasonable to limit the dimension of the remaining signal space to ñ − k.

TRAP-MUSIC differs from the conventional RAP-MUSIC in the way it approximates the transformed signal subspace span(Q_k A) at each recursion step. Analogously to RAP-MUSIC, we define the projection to the transformed signal subspace at the kth recursion as

P_k^TRAP = Σ_{j=1}^{ñ−(k−1)} u_{kj} u_{kj}^T = U_k(:, 1:ñ−(k−1)) U_k(:, 1:ñ−(k−1))^T,   (11)

where U_k is as in Eq. (6). The corresponding scalar scanning function is given by

μ_k^TRAP(p) = ||P_k^TRAP Q_k l(p)||² / ||Q_k l(p)||²,   (12)

and the vector TRAP-MUSIC localizer by

μ_k^TRAP(p) = max_{||η||=1} ||P_k^TRAP Q_k L(p)η||² / ||Q_k L(p)η||².   (13)

The source orientations are determined as in RAP-MUSIC, but using P_k^TRAP instead of P_k. Next, we will explain why the TRAP-MUSIC localizers are free from the RAP dilemma.
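In code, the only change relative to the RAP-MUSIC step sketched above is the truncated projector of Eq. (11); a minimal NumPy sketch, with our own function and variable names, could look as follows.

```python
import numpy as np

def trap_projector(U_s, Q, k):
    """Truncated signal-space projector P_k^TRAP of Eq. (11) at recursion step k (k = 1, 2, ...).

    Identical to the RAP-MUSIC projector of Eq. (6) except that only the
    n_tilde - (k - 1) leading left singular vectors of Q_k U_s are kept,
    i.e., one dimension is dropped for every already-completed step.
    """
    n_tilde = U_s.shape[1]
    U_k, _, _ = np.linalg.svd(Q @ U_s, full_matrices=False)
    r = n_tilde - (k - 1)                    # truncated signal-space dimension
    return U_k[:, :r] @ U_k[:, :r].T
```

Used in place of P_k in the recursion step sketched earlier, this projector leaves the localizers of Eqs. (12) and (13) otherwise identical to those of RAP-MUSIC.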

Removal of the RAP dilemma with TRAP-MUSIC

The removal of the RAP dilemma can be explained as follows. Let us first present the estimated signal space spanned by U_s = U(:, 1:ñ) as

span(U_s) = span([l_1, …, l_n, w_{n+1}, …, w_ñ]),   (14)

where w_j = U_s(:, j), n + 1 ≤ j ≤ ñ, are those columns of U_s that are solely due to noise. Let us consider the kth recursion step, keeping in mind the SVD Q_k U_s = U_k D_k V_k^T, where the columns of U_k form an orthonormal basis of span(Q_k U_s). We see that the estimated signal subspace span(Q_k U_s) = span([Q_k l_1, …, Q_k l_n, Q_k w_{n+1}, …, Q_k w_ñ]). The out-projector Q_k (Eq. (5)) makes the projections Q_k l_j, j < k, of the already-found topographies almost vanishing, while it affects the topographies l_k, …, l_n much less, given that the source topographies were originally linearly independent. Furthermore, assuming that the out-projection is nearly accurate, these small residuals Q_k l_j, j < k, are practically arbitrarily oriented. Thus, the power of the residual is relatively small, and the residuals spread over all dimensions of span(Q_k U_s). At each recursion step k, the dimension with the smallest singular value is due to the residuals, and consequently, removing this dimension from the signal space eliminates the possibility that a large localizer value is attained solely due to the residuals. TRAP-MUSIC applies exactly this procedure in the projection P_k^TRAP, and hence, removes the RAP dilemma.

Simulations

In this section, we describe the simulations that provide both illustrative examples and statistical evidence about the advantages of TRAP-MUSIC compared to RAP-MUSIC. The simulations and analysis of all data were done in Matlab (The Mathworks Inc., Natick, MA, USA).

The head model and the BEM solver

The simulations were done using a realistically shaped head geometry. We preprocessed and segmented the T1-weighted MRIs of a healthy subject (a 29-year-old right-handed male) with FreeSurfer (Fischl, 2012) and MNE software (Gramfort et al., 2014) to obtain the brain, skull, and scalp surfaces; the surfaces were decimated to 5120 triangles, smoothed, and corrected for possible morphological flaws using the iso2mesh Matlab library (Fang and Boas, 2009). The conductivities of the scalp and brain were set to 0.33 (Ωm)^−1, and that of the skull to 1/50th of this value (Gonçalves et al., 2003; Dannhauer et al., 2011). The true source grid (i.e., the ROI) was a 2-D axial lattice at the depth of 4 cm from the top of the head (for a similar approach, see Moiseev et al. (2011)). It had a 2.8-mm spacing between neighboring source locations. The simulated sources were semi-randomly selected from the source grid so that they were at least 3 cm from each other and at least 4 cm away from the center of the ROI. The source dipoles were oriented along the z axis, i.e., upwards. A scanning grid with a spacing of 2 mm was overlaid on the ROI so that the source points and scanning-grid points did not coincide. 60 EEG electrodes and 306 MEG sensors were simulated and co-registered with the head model to represent realistic measurement settings. Finally, we computed the lead fields with a boundary-element method based on linear Galerkin boundary elements, formulated with the isolated-source approach (Hämäläinen and Sarvas, 1989; Stenroos and Sarvas, 2012). We applied the vector forms of the MUSIC algorithms in all analyses. The head model is visualized in Fig. 1. White noise was added to the simulated brain signals in Simulations 1–3. Due to the different domains of the magnetometers and gradiometers, magnetometer noise was multiplied by a factor of 10 to obtain approximately even scaling of the SNR in different sensors.

Fig. 1. The head model used in simulations, with (A) 60 EEG and (B) 306 MEG sensors (102 units consisting of one magnetometer and two planar gradiometers), and (C) the 2-D axial ROI with an example set of 3 sources. The ROI visualization in (C) is from above, and the left-hand side is on the left.

Simulation 1: comparison of RAP-MUSIC vs. TRAP-MUSIC in an example case

Here we demonstrated the performance of TRAP-MUSIC vs. RAP-MUSIC in an example case of simulated MEG data. The task was to evaluate (1) how well the methods can estimate the number of sources and (2) how accurately the sources are located.

In this simulation, n = 3 and ñ = 6 for both methods. The time-courses were mixtures of sinusoids (between 10 and 30 Hz), and their mutual correlations ranged from 0.1 to 0.3. The analysis time window was 1 s. The SNR of the data was set to 5, computed as the ratio of the Frobenius norms of the noiseless data and noise matrices. We also repeated this simulation with uncorrelated sinusoidal time-courses for comparison.

We used an adaptive approach to obtain a contrast marker between true and false sources. We computed the difference between the maximum localizer values in successive recursions; the largest drop in this marker was used to estimate the number of sources. Geometrically, this corresponds to a 'kink' in the graph of max_p μ_k(p) as a function of k. We used this criterion in all simulations and in the analysis of measured data.

Simulation 2: estimating the number of the sources

We compared the ability of RAP- and TRAP-MUSIC to estimate the number and locations of the sources from a large set of simulated data with different properties.

With n = 3, ñ was varied from n + 1 to n + 6. For each ñ, 100 MEG and 100 EEG data sets were simulated for SNRs of 10, 2, 1, 0.9, 0.5, and 0.33. Source time-courses were modeled as in Simulation 1. Simulations were repeated also for uncorrelated sources. The statistical significance of the comparison results was evaluated with Mann–Whitney U-tests with a significance level of 0.001 for a single comparison.

Simulation 3: residual sizes of RAP- and TRAP-MUSIC

We demonstrated the severity of the RAP dilemma by showing that the residuals are generated according to the theory introduced in Theory and methods. This was done by computing the RAP-MUSIC localizer value according to Eq. (8) for the second recursion step at the exact location p_1 of the source that was found at the first recursion step. That is, given the first source estimate (p̂_1, η̂_1), we computed the out-projection Q_2 and the signal-space projection P_2 for the localizer, and computed the localizer at the true source location p_1, i.e., μ_2(p_1). To ensure that the recursive process actually removes the signals generated by the source at p_1, μ_2(p_1) should ideally be close to zero and in practice significantly smaller than 1. Values close to one would imply that signals due to previously found sources are left in the transformed data equation (9), which could hinder source estimation in subsequent recursions. Therefore, the value μ_2(p_1) can be considered as the 'revisit residual' of the first source left at p_1 at the second recursion. We also computed μ_2(p_1) for TRAP-MUSIC (according to Eq. (13)), predicting that this value would be significantly smaller than for RAP-MUSIC. Note that because the localizer value is a continuous function of location, high localizer values are also present in the vicinity of the already-found sources; this phenomenon is confirmed in Simulation 1.

The revisit residual at recursion step 2 of the source at p_1 was defined as

μ_2(p_1) = max_{||η||=1} ||P_2 Q_2 L(p_1)η||² / ||Q_2 L(p_1)η||²,   (15)

where Q_2 = I − B_2 B_2^† = I − l̂_1 l̂_1^T / ||l̂_1||², B_2 = l̂_1 = L(p̂_1)η̂_1, and P_2 is computed according to Eq. (6) for RAP-MUSIC and according to Eq. (11) for TRAP-MUSIC. The point in the scanning grid closest to the true source was used in this evaluation. The locations and the uncorrelated time-courses of n = 3 sources were modeled as in Simulations 1 and 2. The residual μ_2(p_1) was computed for 1000 simulated EEG and 1000 simulated MEG data sets with SNRs of 10, 1, and 0.33. These simulations were performed for ñ = 3 and ñ = 5.

Measured MEG data

We carried out an MEG experiment using multimodal sensory stimuli. The MEG data were obtained from the same subject who was used for creating the head-model geometry in Simulations 1–3. The subject gave written informed consent. The experiments were performed in accordance with the Declaration of Helsinki, and they were approved by the Research Ethics Committee of Aalto University.

The subject was presented with unilateral somatosensory or visual stimuli. Electrical stimulation of the right or left median nerve was used as the somatosensory input, and flashes of black-and-white chessboard patterns to the lower right or lower left visual field as the visual input. These stimuli are well established, and their (primary) brain responses are known to be located in the contralateral somatosensory or visual cortices, respectively (e.g., Ahlfors et al. (1992); Mauguiere et al. (1997); Nakamura et al. (1998); Sharon et al. (2007)). The electrical stimulation consisted of 0.2-ms rectangular pulses, delivered with an intensity slightly below the motor threshold. The visual stimuli were presented for 100 ms exclusively in the left or right lower visual field at a time; they were projected on a flat screen located at a distance of 100 cm from the subject.

The data were acquired with a 306-channel MEG scanner (Elekta Neuromag, Helsinki, Finland) located in the Aalto Neuroimaging MEG Core in Espoo, Finland, with a sampling frequency of 1000 Hz. For each stimulus type, 160 epochs were recorded in a randomized order with an inter-stimulus interval of 2.6–3 s.

The data were bandpass-filtered offline to 2–80 Hz. Data for each stimulus type were averaged over the epochs after a quality check of channels and epochs. An analysis window of 10–100 ms with respect to the stimulus triggers (at time t = 0) was used for the somatosensory data (the electrical stimulus artifact was excluded); a time window of 0–90 ms was used for the visual data. The time windows were equally long to allow mixing these data. The basic Maxfilter pipeline, including signal-space separation (Taulu et al., 2004) and interpolation of bad channels, was applied to the data with the Maxfilter software (Elekta Neuromag). As noted earlier, MUSIC is rather robust with respect to colored noise, and thus we chose not to whiten the data.

We used a realistic cortex geometry to analyze the measured MEG data. The cortex model was computed from the subject's MRIs using FreeSurfer and MNE software. The anatomical cortex model was reconstructed with a grid spacing of approximately 3.1 mm, resulting in 10242 grid points per hemisphere. The lead fields for the anatomical cortex model were computed without orientation constraints for the sources, using the same methodology as in 'The head model and the BEM solver'.

In total, we analyzed 4 MEG data sets involving a single sensory modality and 4 data sets generated by mixing the two modalities. The single-modality data were recorded during right or left median-nerve stimulation (RSS; LSS), or during right or left visual-field (RVF; LVF) stimulation. The mixed data sets were produced by summing data from the RSS, LSS, RVF, and LVF conditions; the analyzed mixed data sets were RSS + RVF, RSS + LVF, LSS + RVF, and LSS + LVF. The number of sources was estimated adaptively as in Simulations 1–3.
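As a concrete illustration of the adaptive largest-drop ('kink') criterion described in Simulation 1 and used in all analyses, a minimal NumPy sketch is given below; the function name and the example values are our own illustrative assumptions.

```python
import numpy as np

def estimate_number_of_sources(max_localizer_values):
    """Estimate n from the per-step maximum localizer values max_p mu_k(p), k = 1..n_tilde.

    The largest drop between successive recursion steps marks the 'kink';
    the step index just before that drop is taken as the estimated number of sources.
    """
    mu_max = np.asarray(max_localizer_values, dtype=float)
    drops = mu_max[:-1] - mu_max[1:]           # drop from step k to step k + 1
    return int(np.argmax(drops)) + 1           # +1 because steps are 1-based

# Illustrative (made-up) example: the largest drop occurs after step 3,
# so the estimate is 3 sources.
# estimate_number_of_sources([0.97, 0.95, 0.93, 0.41, 0.38, 0.36])  # -> 3
```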

Results

Simulations

Simulation 1: comparison of TRAP-MUSIC and RAP-MUSIC in an example case

The localization and n-estimation results for the example case are shown in Fig. 2. TRAP-MUSIC located all 3 sources accurately and estimated their number correctly. TRAP-MUSIC gave a high contrast between the 'true' and 'false' sources, i.e., between the maximum localizer values of the recursion steps 1–3 and 4–6 (see Fig. 2). In steps 1–3, high localizer values indicated clearly the locations of the true sources; when a source was found, its topography was effectively cleaned from the subsequent rounds. This made locating the remaining sources easier and estimating the number of the sources clearer for both the algorithm and visual inspection.

On the other hand, while RAP-MUSIC also located the 3 true sources accurately, it completely failed in estimating the number of the sources. High localizer values were left in the neighborhoods of the already-found sources, making the subsequent localizer values misleading. Therefore, false positives occurred. The difference between the results with mildly correlated and uncorrelated time-courses was negligible.

Simulation 2: TRAP-MUSIC accurately estimates the number of sources

The success rates for estimating the number n of true sources and the mean source-localization errors for RAP-MUSIC and TRAP-MUSIC are shown in Fig. 3 for EEG and MEG data sets with ñ = n + 1, …, n + 6 and SNRs of 10, 5, 2, 1, 0.9, 0.5, and 0.33 in the case of mildly correlated sources. Results were similar with fully uncorrelated sources.

For the EEG data sets, TRAP-MUSIC had a 100 % success rate for all ñ and SNR conditions except for SNR = 0.33, whereas the success rates of RAP-MUSIC were always worse and strongly dependent on the difference ñ − n; with even a small overestimation, RAP-MUSIC failed completely to give the correct number of sources. There were no significant differences in the mean source-localization errors between the methods for any ñ values, and the errors increased for both methods when the SNR decreased. The mean localization error was roughly 10 times larger with the lowest than with the highest SNR. Also TRAP-MUSIC started to flounder in estimating n at the very low SNR of 0.33, which gave an approximate limit for the capability of the methods in our simulation setting.

For the simulated MEG data sets, TRAP-MUSIC had an (almost) 100 % success rate independent of ñ − n with all tested SNRs. With high SNRs, RAP-MUSIC had an equally good success rate only in the case of ñ = n + 1, and the success rate decreased fast as a function of ñ − n > 1. With lower SNRs, the difference in success rates between RAP- and TRAP-MUSIC became smaller and eventually disappeared.

RAP-MUSIC was not able to correctly estimate n in most situations, whereas TRAP-MUSIC gave a robust and correct estimate with a high success rate, independent of the initial estimate ñ.

The maximum localizer value max_p μ_k(p) as a function of the recursion step k is shown in Fig. 4 for both EEG and MEG with different ñ and with SNR = 1. When the recursion step k has reached the true number of sources (here, n = 3), the localizer value should drop drastically. This did not happen for RAP-MUSIC, for which the 'plateau' was observed in EEG systematically later than at the step k = 3 = n, and not at all for MEG. For TRAP-MUSIC, the largest drop of the localizer value was observed immediately after n = 3, giving a strong contrast between true and false sources at the correct index (Fig. 4).

Simulation 3: TRAP-MUSIC yields smaller residuals than RAP-MUSIC

Here we demonstrated the 'revisit residual', i.e., the high localizer values at p_1 at recursion 2, given by Eq. (15). The mean residual sizes from 1000 simulations for RAP-MUSIC and TRAP-MUSIC are shown in Fig. 5. RAP-MUSIC behaved as predicted by the RAP dilemma; it left large residuals, whereas TRAP-MUSIC was able to project out (most of) the information related to the already-found topography, leaving only small residuals to the localizer. Results were similar with ñ = n and ñ = n + 2.

Measured MEG data

We applied TRAP-MUSIC on MEG data acquired during right/left somatosensory (RSS; LSS) or lower visual-field (RVF; LVF) stimulation. Four recursive steps were carried out in TRAP-MUSIC for all analyzed data sets, i.e., ñ = 4. The source localization results for the single-modality sensory stimuli are shown in Fig. 6. The primary responses were localized by TRAP-MUSIC to the left somatosensory cortex for RSS, to the right somatosensory cortex for LSS, to the left visual cortex for RVF, and to the right visual cortex for LVF (see Fig. 6). The SNR was between 1.5 and 2.4 in all measured data sets, determined as the ratio of the Frobenius norms of the data matrix of the analysis time window and the −100 to −10 ms pre-stimulus noise matrix.

In addition to the single-modality data, TRAP-MUSIC was applied also to the mixed multimodal sensory data for the stimulus combinations RSS + RVF, RSS + LVF, LSS + RVF, and LSS + LVF. The localization results are shown in Fig. 7. TRAP-MUSIC was able to separate the somatosensory and visually evoked signals and locate them to their functionally representative cortical areas, matching well with the activated areas in the single-modality stimulation cases, shown in Fig. 6.

Fig. 2. Localization results of RAP-MUSIC vs. TRAP-MUSIC for a simulated MEG data set with n = 3 sources. The localizer values for each recursion step are visualized with a colormap on the ROI. True sources are marked with black crosses. RAP-MUSIC failed to distinguish between the true and false sources. The residuals blurred the vicinity of the already-found sources, making both automatic and visual interpretation of the scanning results difficult. TRAP-MUSIC successfully found all 3 sources and did not suggest extra ones; it gave a high contrast between the true and false sources. Both methods found the true sources equally well. Localizer maxima classified as 'source' or 'no source' by the algorithms are marked with red and white dots, respectively.


Fig. 3. Success rates for estimating the number of sources, and mean source-localization errors, as functions of ñ − n with several SNRs for simulated EEG (A and B) and MEG (C and D) data for RAP- (orange) and TRAP-MUSIC (blue). Note that the black error bars in (B) for SNR = 0.33 were partially cropped out. This was done to keep the scaling over SNR conditions fixed but visually reasonable; the error bars for SNR = 0.33 were trivial, as for that SNR both methods essentially failed in localizing the sources in EEG.

Fig. 4. The maximum value of the localizer μ_k(p) as a function of the recursion step k for various values of ñ, plotted for RAP- (orange) and TRAP-MUSIC (blue) in EEG (A) and MEG (B) with SNR = 1 and n = 3. The curves are averages over 1000 data sets, and the error bars show the standard deviations. For TRAP-MUSIC, max_p μ_k(p) dropped dramatically after the nth step. This did not happen with RAP-MUSIC.

The estimates for the number of sources were 2 for RSS + RVF, 3 for RSS + LVF, 3 for LSS + RVF, and 2 for LSS + LVF.

For the sources classified as 'true' by TRAP-MUSIC, RAP-MUSIC yielded visually equal localization results and had the same separation index for true and false sources as TRAP-MUSIC when 4 recursions were carried out. After these 'likely true' sources, the methods gave somewhat different results, and the maximum value of the localizer for TRAP-MUSIC seemed to decrease earlier and more steeply than that of RAP-MUSIC (see Fig. 8), especially when the number of recursions was large (with ñ = 6 or 8).

Discussion


We analyzed the RAP dilemma and solved it by introducing a novel source localization method, TRAP-MUSIC. With 'RAP dilemma', we refer to the property of the original RAP-MUSIC that prevents it from completely removing the topographies of the already-found sources, yielding large residuals to subsequent recursion steps, which hinders the performance of the method. We showed that the RAP dilemma can be overcome by a sequential dimension reduction of the signal-space projection. By simulations, we showed that TRAP-MUSIC is significantly more accurate and robust in estimating the number of brain-signal sources than RAP-MUSIC. When we applied TRAP-MUSIC to measured MEG data, it located sources to their well-known functional representation areas in the cortex for data evoked by both a single sensory input (e.g., hand stimulus) and by multiple (e.g., hand + visual stimulus) sensory inputs.

The original RAP-MUSIC holds potential as a simple and efficient source localization method, exploiting both the spatial and time domains to solve the inverse problem. It was shown here, however, to have a hidden deficiency preventing optimal performance. One potential reason for this deficiency remaining undiscovered thus far is that using identical grids and identical forward models for creating and scanning sources in simulations makes the RAP dilemma disappear. It seems that Mosher and Leahy (1999) may have used identical grids for simulating and estimating sources. We also tested the performance of RAP-MUSIC using a simulation setting with identical grids; indeed, RAP-MUSIC was then able to reliably separate the true and false sources, with a contrast almost as good as that of TRAP-MUSIC. Nevertheless, using identical grids is an inverse crime and yields unrealistic, biased simulations (Kaipio and Somersalo, 2007).

Some recently introduced recursive MUSIC algorithms use the conventional RAP-MUSIC as a starting point and introduce improvements as additional steps (Liu and Schimpf, 2006; Ewald et al., 2014; Shahbazi et al., 2015). Our improvement, however, is not implemented by adding it on top of the already existing method; instead, it corrects a technically small but crucial deficiency hidden within the original RAP-MUSIC, allowing more reliable and robust source estimation. It is possible that some issues of the RAP dilemma are inherited by methods that use it as a starting point.

Fig. 5. Mean revisit residuals μ_2(p_1), averaged over 1000 data sets with n = 3, presented for RAP-MUSIC and TRAP-MUSIC, for MEG (red) and EEG (green) data with SNRs of 10, 1, and 0.33. RAP-MUSIC left very high (actually maximal) residuals in the localizer value μ_2(p_1), as predicted by the RAP dilemma, whereas TRAP-MUSIC had only small residuals.

Fig. 6. Localization results of the measured sensory-evoked MEG data for the single-modality data analyzed with TRAP-MUSIC. Four data sets with either right/left somatosensory or visual-field stimulation (RSS; LSS; RVF; LVF) were analyzed. Four recursions (k = 1, …, 4) were carried out for each data set. The localizer values above 0.75 are visualized with a 'hot' colormap, and the localizer maxima for each recursion are marked with black dots. Note that the inflated cortices are shown either from an axial or a coronal orientation.


Fig. 7. TRAP-MUSIC localization results of the multisensory-evoked MEG data for the mixed modalities of right/left somatosensory (SS) + right/left visual-field (VF) stimulation: RSS + RVF, RSS + LVF, LSS + RVF, and LSS + LVF. Four recursions (k = 1, …, 4) were carried out for each data set. The localizer values above 0.75 are visualized with a 'hot' colormap, and the localizer maxima for each recursion are marked with black dots. Note that the inflated cortices are shown either from an axial or a coronal orientation.

In Simulation 1, we demonstrated with simulated MEG data how RAP-MUSIC was unable to separate the true and false sources (see Fig. 2). The out-projection of the information related to the already-found sources was incomplete, and it left high localizer values in the vicinity of the already-found sources in the subsequent recursions. This made the localizer-value distributions misleading, and hence, separating true and false sources by automatic classification or by visual inspection was practically impossible. RAP-MUSIC often gave false positives, i.e., had a poor positive predictive value. On the other hand, TRAP-MUSIC successfully found the true and only the true sources. TRAP-MUSIC effectively 'wipes out' the information related to preceding source estimates, and thus ensures that subsequent rounds are free from the residual artifacts. It is worth noticing that the unwanted residuals due to RAP-MUSIC appear not only in the vicinity of the estimated sources, but also several centimeters away from them. TRAP-MUSIC could be particularly useful in applications with poor spatial resolution, e.g., in (online) EEG applications that use only a small number of electrodes.

In Simulation 2, we compared RAP- and TRAP-MUSIC in a wide range of different source configurations, SNRs, and numbers of recursions in EEG and MEG. The results showed that the success rates in estimating n were high in almost all tested SNRs and values of ñ for TRAP-MUSIC, whereas RAP-MUSIC failed drastically in evaluating n for almost all SNRs and ñ. The advantages of TRAP-MUSIC were most pronounced with moderate to high SNRs that are often available with evoked (and averaged) data. TRAP-MUSIC outperformed RAP-MUSIC in estimating n in both EEG and MEG, while keeping the localization accuracy of the true sources similar to that of RAP-MUSIC. With the lowest SNRs, the success-rate differences in the estimation of n became smaller, although TRAP-MUSIC was still better in most cases and never worse than RAP-MUSIC. Evidently, TRAP-MUSIC is able to perform well in a wide range of SNRs.

In Simulation 2, we also studied the evolution of the maximum localizer value as a function of the recursion step k (Fig. 4). It was evident that the TRAP-MUSIC localizer value dropped when the true number of sources had been found, i.e., after k = n steps (see Fig. 4), whereas the drop was observed later, if at all, for RAP-MUSIC. Thus, TRAP-MUSIC gave a strong contrast between the true and false sources, and RAP-MUSIC did not. This illustrates why the success rates of TRAP-MUSIC were so much higher than those of RAP-MUSIC.

The residuals predicted by the RAP dilemma were very pronounced in the idealistic cases of Simulation 3; RAP-MUSIC gave large (actually maximal) localizer values, i.e., revisit residuals μ_2(p_1) = 1, when that value should have been ≪ 1 (see Simulation 3 and Fig. 5 for details). On the other hand, TRAP-MUSIC yielded small residuals, suggesting that in subsequent rounds, the already-found sources should not affect the estimation of the remaining sources. It follows that the recursive global-maximum search is more likely to proceed to a new source point, and eventually, give very low localizer values indicating that all sources have been found. Note that as the localizer of RAP-MUSIC (and TRAP-MUSIC) is a continuous function of location, the residuals are present not only at the true source position, but also in its vicinity (see Fig. 2).

In practice, it is often impossible to know the true number n of the dominating sources that generate the data. Therefore, one needs to ensure that at least n recursions are run (i.e., ñ > n), so that the dimension of the applied signal space is at least as large as that of the true signal space (see Methods or, e.g., Mosher and Leahy (1999)). Thus, there is always a need for separating 'true' and 'false' source estimates. The key factor in the performance of TRAP-MUSIC is its ability to do that separation. With TRAP-MUSIC, it should be safe to give a large ñ at the initialization step of the algorithm, as it gives a robust and clear contrast between true and false positives.

Fig. 8. The maximum value of the localizer as a function of the recursion step for RAP-MUSIC (orange) and TRAP-MUSIC (blue) for three real MEG data sets: (A) right visual-field (RVF; dashed line), (B) left somatosensory (LSS; dotted-dashed line), and (C) RVF + LSS (solid line). Both methods found the primary responses at their presumed cortical areas (see Figs. 6 and 7) with the highest localizer values.

Note that setting the threshold for the separation of 'true' and 'false' sources can be done either with a fixed threshold value or adaptively. We used an adaptive approach, because a fixed threshold, for example μ = 0.95 or 0.8, for separating the true and false sources has been shown to be insufficient, and a generally suitable threshold value cannot be used for data with, e.g., different SNRs or with different source and sensor configurations (Katyal and Schimpf, 2004; Cheyne et al., 2006; Liu and Schimpf, 2006).

In this study, we used a fixed number of three true sources in the simulations. This was done for simplicity, to keep the parameter space reasonably limited. However, given a sufficient signal quality, TRAP-MUSIC can be used to locate a larger number of sources. We evaluated the performance of TRAP-MUSIC with several different values of n in the range 5–12, and the results were qualitatively very similar. We roughly evaluated the maximum number of sources that could be successfully located in our simulation setting in the case of SNR = 1; the tentative estimates were n = 7 for EEG and n = 9 for MEG.

To validate our TRAP-MUSIC algorithm, we used it to analyze measured MEG data evoked by stimulation of the right/left median nerve or the right/left lower visual field. TRAP-MUSIC successfully located the primary responses to their well-known functional representation areas, i.e., to the contralateral somatosensory cortex for median-nerve stimulation and to the contralateral visual cortex for visual-field stimulation. TRAP-MUSIC did not suggest additional spurious sources outside these areas, and the scanning-function values dropped quickly after two or three recursions for each condition (see Fig. 6). This shows that TRAP-MUSIC is suitable for source analysis of typical sensory-evoked MEG data. In addition, we used TRAP-MUSIC to locate sources from multisensory-evoked MEG. TRAP-MUSIC successfully separated and located brain activity from artificially mixed somatosensory- and visually-evoked data (Fig. 7). This suggests that TRAP-MUSIC indeed has the ability to separate brain activity related to different functional processes, that is, it works as a MUltiple SIgnal Classifier.

We also compared TRAP-MUSIC with RAP-MUSIC in the analysis of measured MEG data. Both methods located the primary responses to their corresponding functional areas from both single and mixed sensory-modality data, giving essentially equal results for those few first responses. After the (primary) responses were located as expected at the visual and/or somatosensory cortices, the methods showed different (possibly false) sources. In general, the maximal localizer value for TRAP-MUSIC decreased faster and more steeply than for RAP-MUSIC (see Fig. 8), although the cut-off index was not as clearly visible as in Simulation 2 (see Fig. 4). As both the theory and our simulations suggest, RAP-MUSIC and TRAP-MUSIC should locate the true sources equally well. Therefore, one could possibly enhance the reliability of a MUSIC scan by applying both RAP-MUSIC and TRAP-MUSIC and accepting sources that both methods classified as sources; that is, take the estimate of the number of sources to be the recursion-step index where the two methods diverge. These tentative suggestions need, however, further study.

TRAP-MUSIC is computationally efficient, similarly to RAP-MUSIC (Mosher and Leahy, 1999). RAP-MUSIC has already been used in real-time applications (Dinh et al., 2012, 2014). As the improvement of TRAP-MUSIC comes without any computational cost, it should be useful in online applications as well. In our simulations performed with a normal PC, a vector TRAP-MUSIC run took some fraction of a second, e.g., 0.1–0.9 s, depending on parameters such as the scanning-grid size, the number of recursions, and the number of time points. It is possible to further speed up TRAP-MUSIC, for example, by using its scalar version (Eq. (12)), by applying optimum orientations (Vrba and Robinson, 2000; Sekihara and Nagarajan, 2008), or by scanning with fewer, regionally clustered lead fields (Dinh et al., 2015). Such optimization would allow the use of TRAP-MUSIC in real-time applications, such as closed-loop EEG systems.

It is worth noticing that the form of the time-courses (e.g., whether they are sinusoidal, transient, or random noise) does not affect the performance of MUSIC algorithms per se. The actual main input of MUSIC algorithms is the data covariance matrix, and the time-sample order could be changed without changing the localization result at all. However, the source time-courses affect the localization results in the sense that if the time-courses are linearly dependent or correlated, this influences the covariance matrix. We briefly tested the performance of RAP- and TRAP-MUSIC in MEG with highly correlated (correlations between 0.7 and 0.9) and synchronous sources; the methods could indeed tolerate high correlations, given a sufficient SNR, but failed when synchronous sources were present. The success rates of the methods with data containing highly correlated sources were qualitatively as in Fig. 3, but decreased faster as a function of SNR. A brief description of the effects of temporal correlations and linear dependence of source time-courses on the visibility of such sources in topographical scanning methods that apply the data covariance matrix, like beamformers and MUSIC, has been recently presented by Mäkelä et al. (2017). Special techniques can be used for analyzing highly correlated or synchronous sources (e.g., see Diwakar et al., 2011; Brookes et al., 2007), but this was beyond the scope of this study.

Conclusion

We introduced the TRAP-MUSIC method, which provides a solution to the RAP dilemma, a hidden limitation that we found in RAP-MUSIC. TRAP-MUSIC was successfully applied in source estimation with simulated EEG and MEG and measured MEG data. We argue that TRAP-MUSIC is an efficient and robust tool for locating multiple sources; the method is suitable even for real-time analysis.

Acknowledgments

This study was funded by the Academy of Finland (grant number 283105), the Foundation for Aalto University Science and Technology, and the Oskar Öflund Foundation.

References

Ahlfors, S.P., Ilmoniemi, R.J., Hämäläinen, M.S., 1992. Estimates of visually evoked cortical currents. Electroencephalogr. Clin. Neurophysiol. 82 (3), 225–236.
Baker, A.P., Brookes, M.J., Rezek, I.A., Smith, S.M., Behrens, T., Smith, P.J.P., Woolrich, M., 2014. Fast transient networks in spontaneous human brain activity. Elife 3, e01867.
Birbaumer, N., Murguialday, A.R., Weber, C., Montoya, P., 2009. Neurofeedback and brain–computer interface: clinical applications. Int. Rev. Neurobiol. 86, 107–117.
Birot, G., Albera, L., Wendling, F., Merlet, I., 2011. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach. NeuroImage 56 (1), 102–113.
Brookes, M.J., Stevenson, C.M., Barnes, G.R., Hillebrand, A., Simpson, M.I., Francis, S.T., Morris, P.G., 2007. Beamformer reconstruction of correlated sources using a modified source model. NeuroImage 34 (4), 1454–1465.
Cheyne, D., Bakhtazad, L., Gaetz, W., 2006. Spatiotemporal mapping of cortical activity accompanying voluntary movements using an event-related beamforming approach. Hum. Brain Mapp. 27 (3), 213–229.
Dannhauer, M., Lanfer, B., Wolters, C.H., Knösche, T.R., 2011. Modeling of the human skull in EEG source analysis. Hum. Brain Mapp. 32 (9), 1383–1399.
Dinh, C., Strohmeier, D., Esch, L., Guellmar, D., Baumgarten, D., Hämäläinen, M., Haueisen, J., 2014. Real-time single-trial source localization using RAP-MUSIC and region of interest clustering. Biomed. Engineering/Biomedizinische Tech. 59, S949.
Dinh, C., Strohmeier, D., Haueisen, J., Güllmar, D., 2012. Brain atlas based region of interest selection for real-time source localization using K-means lead field clustering and RAP-MUSIC. Biomed. Engineering/Biomedizinische Tech. 57 (SI-1 Track-O), 813.
Dinh, C., Strohmeier, D., Luessi, M., Güllmar, D., Baumgarten, D., Haueisen, J., Hämäläinen, M.S., 2015. Real-time MEG source localization using regional clustering. Brain Topogr. 28 (6), 771–784.
Diwakar, M., Tal, O., Liu, T.T., Harrington, D.L., Srinivasan, R., Muzzatti, L., Song, T., Theilmann, R.J., Lee, R.R., Huang, M.-X., 2011. Accurate reconstruction of temporal correlation for neuronal sources using the enhanced dual-core MEG beamformer. NeuroImage 56 (4), 1918–1928.
Ewald, A., Avarvand, F.S., Nolte, G., 2014. Wedge MUSIC: a novel approach to examine experimental differences of brain source connectivity patterns from EEG/MEG data. NeuroImage 101, 610–624.
Fang, Q., Boas, D.A., 2009. Tetrahedral mesh generation from volumetric binary and grayscale images. In: Biomedical Imaging: From Nano to Macro, 2009. ISBI'09. IEEE International Symposium on. IEEE, pp. 1142–1145.
Fischl, B., 2012. FreeSurfer. NeuroImage 62 (2), 774–781.
Gonçalves, S.I., de Munck, J.C., Verbunt, J.P., Bijma, F., Heethaar, R.M., da Silva, F.L., 2003. In vivo measurement of the brain and skull resistivities using an EIT-based method and realistic models for the head. IEEE Trans. Biomed. Eng. 50 (6), 754–767.
Gramfort, A., Luessi, M., Larson, E., Engemann, D.A., Strohmeier, D., Brodbeck, C., Parkkonen, L., Hämäläinen, M.S., 2014. MNE software for processing MEG and EEG data. NeuroImage 86, 446–460.


Groß, J., Kujala, J., Hämäläinen, M., Timmermann, L., Schnitzler, A., Salmelin, R., 2001. Dynamic imaging of coherent sources: studying neural interactions in the human brain. Proc. Natl. Acad. Sci. 98 (2), 694–699.
Hämäläinen, M., Hari, R., Ilmoniemi, R.J., Knuutila, J., Lounasmaa, O.V., 1993. Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 65 (2), 413–496.
Hämäläinen, M.S., Sarvas, J., 1989. Realistic conductivity geometry model of the human head for interpretation of neuromagnetic data. Biomed. Eng. IEEE Trans. 36 (2), 165–171.
Kaipio, J., Somersalo, E., 2007. Statistical inverse problems: discretization, model reduction and inverse crimes. J. Comput. Appl. Math. 198 (2), 493–504.
Katyal, B., Schimpf, P.H., 2004. Multiple current dipole estimation in a realistic head model using R-MUSIC. In: Engineering in Medicine and Biology Society, 2004. IEMBS'04. 26th Annual International Conference of the IEEE, vol. 1. IEEE, pp. 829–832.
Liu, H., Schimpf, P.H., 2006. Efficient localization of synchronous EEG source activities using a modified RAP-MUSIC algorithm. Biomed. Eng. IEEE Trans. 53 (4), 652–661.
Mäkelä, N., Sarvas, J., Ilmoniemi, R.J., 2017. A simple reason why beamformer may (not) remove the tACS-induced artifact in MEG (in Proceedings, Neuromodulation NYC 2017 Conference). Brain Stimul. 10 (4), e66–e67.
Mauguiere, F., Merlet, I., Forss, N., Vanni, S., Jousmäki, V., Adeleine, P., Hari, R., 1997. Activation of a distributed somatosensory cortical network in the human brain. A dipole modelling study of magnetic fields evoked by median nerve stimulation. Part I: location and activation timing of SEF sources. Electroencephalogr. Clin. Neurophysiol./Evoked Potentials Sect. 104 (4), 281–289.
Moiseev, A., Gaspar, J.M., Schneider, J.A., Herdman, A.T., 2011. Application of multi-source minimum variance beamformers for reconstruction of correlated neural activity. NeuroImage 58 (2), 481–496.
Mosher, J.C., Leahy, R.M., 1998. Recursive MUSIC: a framework for EEG and MEG source localization. Biomed. Eng. IEEE Trans. 45 (11), 1342–1354.
Mosher, J.C., Leahy, R.M., 1999. Source localization using recursively applied and projected (RAP) MUSIC. Signal Process. IEEE Trans. 47 (2), 332–340.
Mosher, J.C., Leahy, R.M., Lewis, P.S., 1999. EEG and MEG: forward solutions for inverse methods. Biomed. Eng. IEEE Trans. 46 (3), 245–259.
Nakamura, A., Yamada, T., Goto, A., Kato, T., Ito, K., Abe, Y., Kachi, T., Kakigi, R., 1998. Somatosensory homunculus as drawn by MEG. NeuroImage 7 (4), 377–386.
Pascarella, A., Sorrentino, A., Campi, C., Piana, M., 2010. Particle filtering, beamforming and RAP-MUSIC in the analysis of magnetoencephalography time series: a comparison of algorithms. Inverse Probl. Imag. 4, 169–190.
Sanchez, G., Daunizeau, J., Maby, E., Bertrand, O., Bompas, A., Mattout, J., 2014. Toward a new application of real-time electrophysiology: online optimization of cognitive neurosciences hypothesis testing. Brain Sci. 4 (1), 49–72.
Schmidt, R.O., 1986. Multiple emitter location and signal parameter estimation. Antenn. Propag. IEEE Trans. 34 (3), 276–280.
Sekihara, K., Nagarajan, S.S., 2008. Adaptive Spatial Filters for Electromagnetic Brain Imaging. Springer Science & Business Media.
Shahbazi, F., Ewald, A., Nolte, G., 2015. Self-consistent MUSIC: an approach to the localization of true brain interactions from EEG/MEG data. NeuroImage 112, 299–309.
Sharon, D., Hämäläinen, M.S., Tootell, R.B., Halgren, E., Belliveau, J.W., 2007. The advantage of combining MEG and EEG: comparison to fMRI in focally stimulated visual cortex. NeuroImage 36 (4), 1225–1235.
Stenroos, M., Sarvas, J., 2012. Bioelectromagnetic forward problem: isolated source approach revis(it)ed. Phys. Med. Biol. 57 (11), 3517–3535.
Taulu, S., Kajola, M., Simola, J., 2004. Suppression of interference and artifacts by the signal space separation method. Brain Topogr. 16 (4), 269–275.
Vrba, J., Robinson, S.E., 2000. Linearly constrained minimum variance beamformers, synthetic aperture magnetometry, and MUSIC in MEG applications. In: Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers, vol. 1. IEEE, pp. 313–317.
