An Introduction to MM Algorithms for Machine Learning and Statistical Estimation

Hien D. Nguyen

November 12, 2016

School of Mathematics and Physics, University of Queensland, St. Lucia.
Centre for Advanced Imaging, University of Queensland, St. Lucia.

Abstract

MM (majorization–minimization) algorithms are an increasingly popular tool for solving optimization problems in machine learning and statistical estimation. This article introduces the MM algorithm framework in general and via three popular example applications: Gaussian mixture regressions, multinomial logistic regressions, and support vector machines. Specific algorithms for the three examples are derived and numerical demonstrations are presented. Theoretical and practical aspects of MM algorithm design are discussed.

1 Introduction

Let $X \in \mathcal{X} \subset \mathbb{R}^{p}$ and $Y \in \mathcal{Y} \subset \mathbb{R}^{q}$ be random variables, which we shall refer to as the input and target variables, respectively. We shall denote a sample of $n$ independent and identically distributed (IID) pairs of variables $\mathcal{D} = \{(X_{i}, Y_{i})\}_{i=1}^{n}$ as the data, and $\bar{\mathcal{D}} = \{(x_{i}, y_{i})\}_{i=1}^{n}$ as an observed realization of the data. Under the empirical risk minimization (ERM) framework of Vapnik (1998, Ch. 1) or the extremum estimation (EE) framework of Amemiya (1985, Ch. 4), a large number of machine learning and statistical estimation problems can be phrased as the computation of

$$\min_{\theta \in \Theta} \mathcal{R}\left(\theta; \bar{\mathcal{D}}\right) \quad \text{or} \quad \hat{\theta} = \arg\min_{\theta \in \Theta} \mathcal{R}\left(\theta; \bar{\mathcal{D}}\right), \qquad (1)$$

where $\mathcal{R}\left(\theta; \bar{\mathcal{D}}\right)$ is a risk function defined over the observed data $\bar{\mathcal{D}}$ and is dependent on some parameter $\theta \in \Theta$.

Common risk functions that are used in practice are the negative log-likelihood functions, which can be expressed as

$$\mathcal{R}\left(\theta; \bar{\mathcal{D}}\right) = -\frac{1}{n} \sum_{i=1}^{n} \log f\left(x_{i}, y_{i}; \theta\right),$$

where $f\left(x, y; \theta\right)$ is a density function over the support of $X$ and $Y$, which takes parameter $\theta$. The minimization of the risk in this case yields the maximum likelihood (ML) estimate for the data $\bar{\mathcal{D}}$, given the parametric family $f$. Another common risk function is the $|\cdot|^{d}$-norm difference between the target variable and some function $f$ of the input:

$$\mathcal{R}\left(\theta; \bar{\mathcal{D}}\right) = \frac{1}{n} \sum_{i=1}^{n} \left|y_{i} - f\left(x_{i}; \theta\right)\right|^{d},$$

where $d \in [1, 2]$, $y_{i} \in \mathbb{R}$, and $f\left(x; \theta\right)$ is some predictive function of $y_{i}$ with parameter $\theta$ that takes $x$ as an input. Setting $d = 1$ and $d = 2$ yield the common least-absolute-deviation and least-squares criteria, respectively. Furthermore, taking $f\left(x; \theta\right) = \theta^{\top} x$ and $d = 2$ simultaneously yields the classical ordinary least-squares criterion. Here $\Theta \subset \mathbb{R}^{p}$ and the superscript $\top$ indicates matrix transposition.

When $\mathcal{Y} = \{-1, 1\}$, a common problem in machine learning is to construct a classification function $f\left(x_{i}; \theta\right)$ that minimizes the classification (0-1) risk

$$\mathcal{R}\left(\theta; \bar{\mathcal{D}}\right) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}\left\{y_{i} \neq f\left(x_{i}; \theta\right)\right\},$$

where $f: \mathcal{X} \to \mathcal{Y}$ and $\mathbb{I}\left\{A\right\} = 1$ if proposition $A$ is true and $0$ otherwise. Unfortunately, the form of the classification risk is combinatorial and thus necessitates the use of surrogate classification risks of the form

$$\mathcal{R}\left(\theta; \bar{\mathcal{D}}\right) = \frac{1}{n} \sum_{i=1}^{n} \psi\left(x_{i}, y_{i}, f\left(x_{i}; \theta\right)\right),$$

where $\psi: \mathbb{R}^{p} \times \{-1, 1\} \to [0, \infty)$ and $\psi\left(x, y, y\right) = 0$ for all $x$ and $y$ [cf. Scholkopf & Smola (2002, Def. 3.1)]. An example of a machine learning algorithm that minimizes a surrogate classification risk is the support vector machine (SVM) of Cortes & Vapnik (1995). The linear-basis SVM utilizes a surrogate risk function, where $\psi\left(x, y, f\left(x; \theta\right)\right) = \max\left\{0, 1 - y f\left(x; \theta\right)\right\}$ is the hinge loss function, $f\left(x; \theta\right) = \alpha + \beta^{\top} x$, and $\theta^{\top} = \left(\alpha, \beta^{\top}\right) \in \Theta \subset \mathbb{R}^{p+1}$.
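To make the surrogate-risk notation concrete, the following sketch (illustrative only; the function and variable names are ours, not the article's) evaluates both the 0-1 classification risk and its hinge-loss surrogate for the linear-basis classifier $f\left(x; \theta\right) = \alpha + \beta^{\top} x$ on an observed sample. Here `X` is an $n \times p$ array of inputs, `y` an $n$-vector of $\pm 1$ labels, and the real-valued linear score is converted into a label via its sign.

```python
import numpy as np

def zero_one_risk(alpha, beta, X, y):
    """Combinatorial 0-1 risk: (1/n) * sum_i I{y_i != sign(alpha + beta'x_i)}."""
    return np.mean(y != np.sign(alpha + X @ beta))

def hinge_risk(alpha, beta, X, y):
    """Hinge-loss surrogate risk: (1/n) * sum_i max{0, 1 - y_i (alpha + beta'x_i)}."""
    return np.mean(np.maximum(0.0, 1.0 - y * (alpha + X @ beta)))
```

Because $\max\left\{0, 1 - ys\right\} \geq 1$ whenever the label $y \in \{-1, 1\}$ disagrees with the sign of the score $s$, the hinge risk upper-bounds the 0-1 risk term by term, which is what makes it a usable surrogate.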
The task of computing (1) may be complicated by various factors that fall outside the scope of the traditional calculus formulation for optimization [cf. Khuri (2003, Ch. 7)]. Such factors include the lack of differentiability of $\mathcal{R}$ or difficulty in obtaining closed-form solutions to the first-order condition (FOC) equation $\nabla_{\theta} \mathcal{R} = \mathbf{0}$, where $\nabla_{\theta}$ is the gradient operator with respect to $\theta$, and $\mathbf{0}$ is a zero vector.

The MM (majorization–minimization) algorithm framework is a unifying paradigm for simplifying the computation of (1) when difficulties arise, via iterative minimization of surrogate functions. MM algorithms are particularly attractive due to the monotonicity and thus stability of their objective sequences, as well as global convergence of their limits, in general settings.

A comprehensive treatment on the theory and implementation of MM algorithms can be found in Lange (2016). Summaries and tutorials on MM algorithms for various problems can be found in Becker et al. (1997), Hunter & Lange (2004), Lange (2013, Ch. 8), Lange et al. (2000), Lange et al. (2014), McLachlan & Krishnan (2008, Sec. 7.7), Wu & Lange (2010), and Zhou et al. (2010). Some theoretical analyses of MM algorithms can be found in de Leeuw & Lange (2009), Lange (2013, Sec. 12.4), Mairal (2015), and Vaida (2005).

It is known that MM algorithms are generalizations of the EM (expectation–maximization) algorithms of Dempster et al. (1977) [cf. Lange (2013, Ch. 9)]. The recently established connection between MM algorithms and the successive upper-bound maximization (SUM) algorithms of Razaviyayn et al. (2013) further shows that the MM algorithm framework also covers the concave-convex procedures [Yuille & Rangarajan (2003); CCCP], proximal algorithms (Parikh & Boyd, 2013), forward-backward splitting algorithms [Combettes & Pesquet (2011); FBS], as well as various incarnations of iteratively-reweighted least-squares algorithms (IRLS) such as those of Becker et al. (1997) and Lange et al. (2000); see Hong et al. (2016) for details.

It is not possible to provide a complete list of applications of MM algorithms to machine learning, statistical estimation, and signal processing problems. We present a comprehensive albeit incomplete summary of applications of MM algorithms in Table 1. Further examples and references can be found in Hong et al. (2016) and Lange (2016).

Table 1: MM algorithm applications and references. Applications: Bradley-Terry models estimation; convex and shape-restricted regressions; Dirichlet-multinomial distributions estimation; elliptical symmetric distributions estimation; fully-visible Boltzmann machines estimation; Gaussian mixtures estimation; geometric and sigmoidal programming; heteroscedastic regressions; Laplace regression models estimation; least $|\cdot|^{d}$-norm regressions; linear mixed models estimation; logistic and multinomial regressions; Markov random field estimation; matrix completion and imputation; mixture of experts models estimation; multidimensional scaling; multivariate $t$ distributions estimation; non-negative matrix factorization; Poisson regression estimation; point to set minimization; polygonal distributions estimation; quantile regression estimation; SVM estimation; transmission tomography image reconstruction; and variable selection in regression via regularization. References: Becker et al. (1997); Bohning & Lindsay (1988); Bohning (1992); Chi & Lange (2014); Chi et al. (2014); Daye et al. (2012); de Leeuw & Lange (2009); De Pierro (1993); Hunter (2004); Hunter & Lange (2000); Hunter & Li (2005); Lange & Zhou (2014); Lange et al. (2000); Lange et al. (2014); Lee & Seung (1999); Mazumder et al. (2010); Nguyen & McLachlan (2014, 2015, 2016a, 2016b); Nguyen & Wood (2016); Nguyen et al. (2016a, 2016b, 2016c); Wu & Lange (2010); Zhou & Lange (2010); Zhou & Zhang (2012).

In this article, we will present the MM algorithm framework via applications to three examples that span the scope of statistical estimation and machine learning problems: Gaussian mixtures of regressions (GMR), multinomial-logistic regressions (MLR), and SVM estimations. The three estimation problems will firstly be presented in Section 2. The MM algorithm framework will be presented in Section 3 along with some theoretical results. MM algorithms for the three estimation problems are presented in Section 4. Numerical demonstrations of the MM algorithms are presented in Section 5. Conclusions are then drawn in Section 6.
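As a concrete preview of the surrogate-minimization idea, ahead of the formal treatment in Section 3, the sketch below applies an MM scheme to the least-absolute-deviation risk discussed above (the $|\cdot|^{d}$-norm risk with $d = 1$). It uses the standard quadratic majorizer $|r| \leq r^{2}/(2|r^{(k)}|) + |r^{(k)}|/2$ that underlies the IRLS algorithms cited above (Becker et al., 1997; Lange et al., 2000); the code and its names are our illustration, not the article's implementation.

```python
import numpy as np

def lad_mm(X, y, max_iter=200, eps=1e-8, tol=1e-10):
    """MM (IRLS-type) minimization of the least-absolute-deviation risk
    (1/n) * sum_i |y_i - x_i' theta|.

    Each iteration exactly minimizes the surrogate built from the quadratic
    majorizer |r| <= r^2 / (2|r_k|) + |r_k|/2, which is a weighted
    least-squares problem; the risk sequence is non-increasing."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares start
    risk = np.mean(np.abs(y - X @ theta))
    for _ in range(max_iter):
        r = y - X @ theta
        w = 1.0 / (2.0 * np.maximum(np.abs(r), eps))  # majorizer weights
        WX = X * w[:, None]                           # row-wise weighting
        theta = np.linalg.solve(X.T @ WX, WX.T @ y)   # minimize the surrogate
        new_risk = np.mean(np.abs(y - X @ theta))
        if risk - new_risk < tol:                     # monotone decrease
            break
        risk = new_risk
    return theta
```

The `eps` guard keeps the weights finite when a residual is numerically zero, a standard safeguard in the IRLS literature; the monotone decrease of the objective it records is exactly the stability property of MM algorithms noted above.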
2 Example problems

2.1 Gaussian mixture of regressions

Let $X$ arise from a distribution with unknown density function $f_{X}\left(x\right)$, which does not depend on the parameter $\theta$ ($X$ can be non-stochastic). Conditioned on $X = x$, suppose that $Y$ can arise from one of $g \in \mathbb{N}$ possible component regression regimes. Let $Z$ be a random variable that indicates the component from which $Y$ arises, such that $\mathbb{P}\left(Z = c\right) = \pi_{c}$, where $c \in [g]$ ($[g] = \{1, 2, ..., g\}$), $\pi_{c} > 0$, and $\sum_{c=1}^{g} \pi_{c} = 1$. Write the conditional probability density of $Y$ given $X = x$ and $Z = c$ as

$$f_{Y|X,c}\left(y|x; B_{c}, \Sigma_{c}\right) = \phi\left(y; B_{c} x, \Sigma_{c}\right), \qquad (2)$$

where $B_{c} \in \mathbb{R}^{q \times p}$, $\Sigma_{c}$ is a positive-definite $q \times q$ covariance matrix, and

$$\phi\left(y; \mu, \Sigma\right) = \left(2\pi\right)^{-q/2} \left|\Sigma\right|^{-1/2} \exp\left[-\frac{1}{2}\left(y - \mu\right)^{\top} \Sigma^{-1} \left(y - \mu\right)\right] \qquad (3)$$

is the multivariate Gaussian density with mean vector $\mu$ and covariance matrix $\Sigma$. The conditional (in $Z$) characterization (2) leads to the marginal characterization

$$f_{Y|X}\left(y|x; \theta\right) = \sum_{c=1}^{g} \pi_{c} \phi\left(y; B_{c} x, \Sigma_{c}\right), \qquad (4)$$

where $\theta$ contains the parameter elements $\pi_{c}$, $B_{c}$, and $\Sigma_{c}$, for $c \in [g]$. We refer to the characterization (4) as the GMR model.

The GMR model was first proposed by Quandt (1972) for the $q = 1$ case, and an EM algorithm for the same case was proposed in DeSarbo & Cron (1988). To the best of our knowledge, the general multivariate case ($q > 1$) of characterization (4) was first considered in Jones & McLachlan (1992). See McLachlan & Peel (2000) regarding mixture models in general.
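For concreteness, the following sketch evaluates the GMR conditional density (4) at a single point. It is our own illustration: the representation of $\theta$ as Python lists of component parameters, and all numbers in the example, are assumptions rather than anything specified in the article.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmr_density(y, x, pis, Bs, Sigmas):
    """GMR conditional density (4): sum_c pi_c * phi(y; B_c x, Sigma_c)."""
    return sum(pi_c * multivariate_normal.pdf(y, mean=B_c @ x, cov=Sigma_c)
               for pi_c, B_c, Sigma_c in zip(pis, Bs, Sigmas))

# Two-component example with q = 1 and p = 2 (all values illustrative).
pis = [0.6, 0.4]
Bs = [np.array([[1.0, 0.5]]), np.array([[-2.0, 1.0]])]
Sigmas = [np.array([[0.5]]), np.array([[1.5]])]
print(gmr_density(np.array([0.3]), np.array([1.0, -0.2]), pis, Bs, Sigmas))
```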
Given data $\bar{\mathcal{D}}$, the estimation of a GMR model requires the minimization of the negative log-likelihood risk

$$\mathcal{R}\left(\theta; \bar{\mathcal{D}}\right) = -\frac{1}{n} \sum_{i=1}^{n} \log f_{Y|X}\left(y_{i}|x_{i}; \theta\right) = -\frac{1}{n} \sum_{i=1}^{n} \log \sum_{c=1}^{g} \pi_{c} \phi\left(y_{i}; B_{c} x_{i}, \Sigma_{c}\right).$$
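A direct numerical transcription of this risk is sketched below, using the same component-wise parameterization as the previous sketch; again, the code is our illustration rather than the article's implementation. The inner sum over components is evaluated on the log scale with log-sum-exp, which avoids underflow when the component densities are small.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def gmr_nll_risk(xs, ys, pis, Bs, Sigmas):
    """Negative log-likelihood risk of the GMR model:
    -(1/n) * sum_i log sum_c pi_c * phi(y_i; B_c x_i, Sigma_c)."""
    total = 0.0
    for x_i, y_i in zip(xs, ys):
        log_terms = [np.log(pi_c)
                     + multivariate_normal.logpdf(y_i, mean=B_c @ x_i, cov=Sigma_c)
                     for pi_c, B_c, Sigma_c in zip(pis, Bs, Sigmas)]
        total += logsumexp(log_terms)   # stable log of the mixture density
    return -total / len(ys)
```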