Survey of Sparse and Non-Sparse Methods in Source Separation
Paul D. O'Grady,1 Barak A. Pearlmutter,1 Scott T. Rickard2
1 Hamilton Institute, National University of Ireland Maynooth, Co. Kildare, Ireland. Ollscoil na hÉireann Má Nuad, Éire
2 University College Dublin, Ireland. An Coláiste Ollscoile, Baile Átha Cliath, Éire

Received 20 September 2004; accepted 4 April 2005

ABSTRACT: Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is accomplished by taking into account the structure of the mixing process and by making assumptions about the sources. When the information about the mixing process and sources is limited, the problem is called "blind". By assuming that the sources can be represented sparsely in a given basis, recent research has demonstrated that solutions to previously problematic blind source separation problems can be obtained. In some cases, solutions are possible to problems intractable by previous non-sparse methods. Indeed, sparse methods provide a powerful approach to the separation of linear mixtures of independent data. This paper surveys the recent arrival of sparse blind source separation methods and the previously existing non-sparse methods, providing insights and appropriate hooks into the literature along the way. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 18–33, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20035

Key words: Blind Source Separation; sparse methods; Non-negative Matrix Factorization

Correspondence to: Paul D. O'Grady; e-mail: [email protected]
Supported by the Higher Education Authority of Ireland (An tÚdarás Um Ard-Oideachas) and Science Foundation Ireland grant 00/PI.1/C067.

I. INTRODUCTION
When presented with a set of observations from sensors such as microphones, the process of extracting the underlying sources is called source separation. Doing so without strong additional information about the individual sources or constraints on the mixing process is called blind source separation (BSS). Generally the problem is stated as follows: given M linear mixtures of N sources mixed via an unknown M × N mixing matrix A, estimate the underlying sources from the mixtures. When M ≥ N, this can be achieved by constructing an unmixing matrix W, where W = A⁻¹ up to permutation and scaling of the rows. The conditions that must be satisfied to guarantee separation are given by Darmois' Theorem (Darmois, 1953), which states that the sources must be non-Gaussian and statistically independent.

The dimensionality of the mixing process influences the complexity of source separation. If M = N, the mixing process A is defined by an even-determined (i.e. square) matrix and, provided that it is non-singular, the underlying sources can be estimated by a linear transformation. If M > N, the mixing process A is defined by an over-determined matrix and, provided that it is full rank, the underlying sources can be estimated by least-squares optimisation or a linear transformation involving the matrix pseudo-inverse. If M < N, the mixing process is defined by an under-determined matrix, and consequently source estimation becomes more involved and is usually achieved by some non-linear technique.

Environmental assumptions about the surroundings in which the sensor observations are made also influence the complexity of the problem. BSS of acoustic signals is often referred to as the cocktail party problem (Cherry, 1953); that is, the separation of individual voices from a myriad of voices in an uncontrolled acoustic environment such as a cocktail party. Sensor observations in a natural environment are confounded by signal reverberations, and consequently the estimated unmixing process needs to identify a source arriving from multiple directions at different times as one individual source. Generally, BSS techniques depart from this difficult real-world scenario and make less realistic assumptions about the environment so as to make the problem more tractable. There are typically three assumptions made about the environment. The most rudimentary of these is the instantaneous case, where sources arrive instantly at the sensors but with differing signal intensity. An extension of this assumption, where arrival delays between sensors are also considered, is known as the anechoic case. The anechoic case can be further extended by considering multiple paths between each source and each sensor, which results in the echoic case, sometimes known as convolutional mixing. Each case can be extended to incorporate linear additive noise, which is usually assumed to be white and Gaussian.

Assumptions can also be made about the nature of the sources. Such assumptions form the basis for most source separation algorithms and include statistical properties such as independence and stationarity. One increasingly popular and powerful assumption is that the sources have a parsimonious representation in a given basis. These methods have come to be known as sparse methods. A signal is said to be sparse when it is zero or nearly zero more often than might be expected from its variance. Such a signal has a probability density function or distribution of values with a sharp peak at zero and fat tails. This shape can be contrasted with a Gaussian distribution, which would have a smaller peak and tails that taper quite rapidly. A standard sparse distribution is the Laplacian distribution (p(c) ∝ e^−|c|), which has led to the sparseness assumption being sometimes referred to as a Laplacian prior.
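The contrast between a Laplacian and a Gaussian signal of equal variance can be checked numerically. The following sketch (an illustration, not taken from the paper; the sample size and the near-zero threshold of 0.1 are arbitrary choices) measures both properties described above: the sharp peak at zero and the fat tails.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
lap = rng.laplace(scale=1 / np.sqrt(2), size=n)  # unit-variance Laplacian
gau = rng.normal(size=n)                         # unit-variance Gaussian

def near_zero_fraction(v, eps=0.1):
    """Fraction of samples within eps of zero."""
    return np.mean(np.abs(v) < eps)

def excess_kurtosis(v):
    """Fourth-moment ratio minus 3; zero for a Gaussian."""
    return np.mean(v**4) / np.mean(v**2) ** 2 - 3

# The sparse (Laplacian) signal spends more time near zero...
print(near_zero_fraction(lap), near_zero_fraction(gau))  # ~0.13 vs ~0.08
# ...and has heavier tails: excess kurtosis ~3 for a Laplacian, ~0 for a Gaussian.
print(excess_kurtosis(lap), excess_kurtosis(gau))
```

Despite having the same variance, the Laplacian samples cluster more tightly around zero while placing more mass far out in the tails, which is exactly the shape the Laplacian-prior assumption exploits.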
The advantage of a sparse signal representation is that the probability of two or more sources being simultaneously active is low. Thus, sparse representations lend themselves to good separability because most of the energy in a basis coefficient at any time instant belongs to a single source. This statistical property of the sources results in a well-defined structure being imposed by the mixing process on the resultant mixtures, which can be exploited to make estimating the mixing process much easier. Additionally, sparsity can in many instances be used to perform source separation when there are more sources than sensors. A sparse representation of an acoustic signal can often be achieved by a transformation into a Fourier, Gabor or wavelet basis. The sparse representation of an acoustic signal also has an interpretation in information-theoretic terms, where the representation of a signal using a small number of coefficients corresponds to transmission of information using a code utilising a small number of bits. Sparse representation of information is a phenomenon that also occurs in the natural world.
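The low probability of simultaneous activity can be illustrated with a toy model (the 5% activity rate below is an assumption of this sketch, not a figure from the paper) in which each source's basis coefficients are independently active with small probability:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # number of basis coefficients (e.g. time-frequency bins)

def sparse_coeffs(p_active=0.05):
    """Coefficients that are mostly zero, active with probability p_active."""
    active = rng.random(n) < p_active
    return active * rng.laplace(size=n)

c1, c2 = sparse_coeffs(), sparse_coeffs()

# For independent 5%-active sources, both are active in the same coefficient
# with probability ~0.05 * 0.05 = 0.0025, so almost every coefficient of a
# mixture carries energy from at most one source.
both = np.mean((c1 != 0) & (c2 != 0))
print(both)
```

This near-disjointness of support is what lets sparse methods attribute each mixture coefficient to a single source when estimating the mixing process.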
In the brain, neurons are said to encode data in a sparse way if their firing pattern is characterised by long periods of inactivity (Földiák and Young, 1995; Körding et al., 2002), and recent work (DeWeese et al., 2003) indicates that sparse representations exist in the auditory cortex.

Blind source separation has been studied for nearly two decades. The earliest approach traces back to Herault and Jutten (1986), whose goal was to separate an instantaneous linear even-determined mixture of non-Gaussian independent sources. They proposed a solution that used a recurrent artificial neural network to separate the unknown sources, the crucial assumption being that the underlying signals were independent. This early work led to the pioneering adaptive algorithm of Jutten and Herault (1991). Linsker (1989) proposed unsupervised learning rules based on information theory that maximise the average mutual information between the inputs and outputs of an artificial neural network. Comon (1994) proposed that mutual information was the most natural measure of independence and showed that maximising the non-Gaussianity of the source signals was equivalent to minimising the mutual information between them. He also dubbed the concept of determining underlying sources by maximising independence independent component analysis (ICA). Bell and Sejnowski (1995) developed a BSS algorithm called BS-Infomax, which is similar in spirit to that of Linsker and uses an elegant stochastic gradient learning rule proposed by Amari et al. (1996). The idea of non-Gaussianity of sources was used by Hyvärinen and Oja (1997) to develop their fICA algorithm.
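The non-Gaussianity-maximisation idea behind algorithms such as fICA can be sketched as a one-unit, kurtosis-based fixed-point iteration on whitened mixtures. This is a minimal illustration in that spirit, not the published fICA algorithm; the mixing matrix, sample size and iteration count below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 20_000
S = rng.laplace(size=(2, T))            # independent non-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])              # hypothetical mixing matrix
X = A @ S                               # observed mixtures

# Whiten the mixtures: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit fixed-point iteration maximising kurtosis (non-Gaussianity):
# w <- E[z (w'z)^3] - 3w, renormalised each step.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

y = w @ Z   # estimate of one source, up to sign and scale
```

After convergence, y is strongly correlated with one of the original sources, recovered up to the sign and scale ambiguity inherent in ICA.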
As an alternative approach to separation using mutual information, Gaeta and Lacoume (1990) proposed maximum likelihood estimation, an approach elaborated by Pham et al. (1992), although Pearlmutter and Parra (1996) and Cardoso (1997) later demonstrated that the BS-Infomax algorithm and maximum likelihood estimation are essentially equivalent. The early years of BSS research concentrated on solutions for even-determined and over-determined mixing processes. It was not until recent years that a

Table I. A Comparison of Various Source Separation Techniques. [Columns group each method by mixing model (instantaneous, anechoic, echoic), representation (time, frequency, signal dictionary) and unmixing approach (parameter estimation, separation), with rows indexed by numbers of sources (N) and sensors (M); sparse methods are marked. Individual table entries are not legible in this extraction.]
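For the even- and over-determined settings that dominated early BSS research, source recovery reduces to linear algebra once the mixing matrix is available. A minimal numpy sketch (with arbitrary, assumed mixing matrices; in BSS proper, A would first have to be estimated):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1_000
S = rng.laplace(size=(2, T))  # two non-Gaussian sources (N = 2)

# Even-determined (M = N = 2): unmix with the inverse of the square matrix.
A_sq = np.array([[1.0, 0.5],
                 [0.3, 1.0]])
S_hat_sq = np.linalg.inv(A_sq) @ (A_sq @ S)

# Over-determined (M = 3 > N = 2): least-squares unmixing via the pseudo-inverse.
A_ov = np.array([[1.0, 0.5],
                 [0.3, 1.0],
                 [0.7, 0.2]])
S_hat_ov = np.linalg.pinv(A_ov) @ (A_ov @ S)

print(np.allclose(S_hat_sq, S), np.allclose(S_hat_ov, S))  # True True
```

In the noiseless case both recoveries are exact; the under-determined case (M < N) admits no such linear inverse, which is where the non-linear, sparsity-based techniques surveyed here come in.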