Differentially Private Stochastic Coordinate Descent


Georgios Damaskinos,¹* Celestine Mendler-Dünner,²* Rachid Guerraoui,¹ Nikolaos Papandreou,³ Thomas Parnell³
¹École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, ²University of California, Berkeley, ³IBM Research, Zurich
georgios.damaskinos@epfl.ch, [email protected], rachid.guerraoui@epfl.ch, {npo,[email protected]

Abstract

In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private. Compared to the classical gradient descent algorithm, where updates operate on a single model vector and controlled noise addition to this vector suffices to hide critical information about individuals, stochastic coordinate descent crucially relies on keeping auxiliary information in memory during training. This auxiliary information constitutes an additional privacy leak and poses the major challenge addressed in this work. Driven by the insight that, under independent noise addition, the consistency of the auxiliary information holds in expectation, we present DP-SCD, the first differentially private stochastic coordinate descent algorithm. We analyze our new method theoretically and argue that decoupling and parallelizing coordinate updates is essential for its utility. On the empirical side we demonstrate competitive performance against the popular stochastic gradient descent alternative (DP-SGD) while requiring significantly less tuning.

1 Introduction

Stochastic coordinate descent (SCD) (Wright 2015) is an appealing optimization algorithm. Compared with the classical stochastic gradient descent (SGD), SCD does not require a learning rate to be tuned and often exhibits favorable convergence behavior (Fan et al. 2008; Dünner et al. 2018; Ma et al. 2015; Hsieh, Yu, and Dhillon 2015). In particular, for training generalized linear models, SCD is the algorithm of choice for many applications and has been implemented as a default solver in several popular packages such as Scikit-learn, TensorFlow, and Liblinear (Fan et al. 2008).

However, SCD is not designed with privacy concerns in mind: SCD builds models that may leak sensitive information about the training data records. This is a major issue for privacy-critical domains such as health care, where models are trained on the medical records of patients. Nevertheless, the low tuning cost of SCD is a particularly appealing property for privacy-preserving machine learning (ML): hyperparameter tuning costs not only additional computation but also privacy budget.

Our goal is to extend SCD with mechanisms that preserve data privacy, and thus enable the reuse of the large engineering efforts invested in SCD (Bradley et al. 2011a; Ioannou, Mendler-Dünner, and Parnell 2019) for privacy-critical use cases. In this paper we therefore ask the question: Can SCD maintain its benefits (convergence guarantees, low tuning cost) alongside strong privacy guarantees?

We employ differential privacy (DP) (Dwork, Roth et al. 2014) as our mathematical definition of privacy. DP provides a formal guarantee on the amount of information leaked about participants in a database, given a strong adversary and the output of a mechanism that processes this database. DP offers a fine-grained way of measuring privacy across multiple subsequent executions of such mechanisms and is thus well aligned with the iterative nature of ML algorithms.

The main challenge of making SCD differentially private is that an efficient implementation stores and updates not only the model vector α but also an auxiliary vector v := Xα to avoid recurring computations. These two vectors are coupled by the data matrix X and need to be consistent for standard convergence results to hold. However, to provide rigorous privacy guarantees, it is vital to add independent noise to both vectors, which prohibits this consistency.
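To make the consistency issue concrete, the following minimal numpy sketch (our illustration, not the authors' implementation; the toy data and all names are ours) contrasts an exact coordinate update, which keeps v = Xα by construction, with a privatized update that perturbs the model increment and the auxiliary increment with independent Gaussian noise, so the identity holds only in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 20
X = rng.standard_normal((M, N))
X /= np.linalg.norm(X, axis=0)       # examples as columns, ||x_i|| = 1

alpha = np.zeros(N)                  # dual model vector
v = X @ alpha                        # auxiliary vector, v = X @ alpha

def exact_update(i, delta):
    """Coordinate update that keeps v = X @ alpha exactly consistent."""
    alpha[i] += delta
    v[:] += delta * X[:, i]

def private_update(i, delta, sigma=0.1):
    """Privatized update (sketch): independent noise on both increments,
    so v = X @ alpha holds only in expectation after the update."""
    alpha[i] += delta + sigma * rng.standard_normal()
    v[:] += delta * X[:, i] + sigma * rng.standard_normal(M)

exact_update(0, 0.5)
assert np.allclose(v, X @ alpha)     # consistency preserved
private_update(1, 0.5)
print(np.linalg.norm(v - X @ alpha)) # nonzero: exact consistency broken
```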
Contributions. We present DP-SCD, a differentially private version of the SCD algorithm (Shalev-Shwartz and Zhang 2013b) with formal privacy guarantees. In particular, we make the following contributions.

• We extend SCD to compute each model update as an aggregate of independent updates computed on a random data sample. This parallelism is crucial for the utility of the algorithm because it reduces the amount of noise per sample that is necessary to guarantee DP.
• We provide the first analysis of SCD in the DP setting and derive a bound on the maximum level of noise that can be tolerated to guarantee a given level of utility.
• We empirically show that for problems where SCD has closed-form updates, DP-SCD achieves a better privacy-utility trade-off than the popular DP-SGD algorithm (Abadi et al. 2016) while, at the same time, being free of a learning rate hyperparameter that needs tuning. Our implementation is available¹.

*Work partially conducted while at IBM Research, Zurich.
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
¹ https://github.com/gdamaskinos/dpscd

2 Preliminaries

Before we dive into the details of making SCD differentially private, we first formally define the problem class considered in this paper and provide the necessary background on SCD and differential privacy.

2.1 Setup

We target the training of Generalized Linear Models (GLMs), the class of models to which SCD is most commonly applied. This class includes convex problems of the following form:

\[ \min_{\theta} F(\theta; X) := \min_{\theta} \Big\{ \frac{1}{N} \sum_{i=1}^{N} \ell_i(x_i^{\top}\theta) + \frac{\lambda}{2}\|\theta\|^2 \Big\} \tag{1} \]

The model vector θ ∈ ℝ^M is learnt from the training dataset X ∈ ℝ^{M×N} that contains the N training examples x_i ∈ ℝ^M as columns, λ > 0 denotes the regularization parameter, and ℓ_i the convex loss functions. The norm ‖·‖ refers to the L2-norm. For the rest of the paper we use the common assumption that for all i ∈ [N] the data examples x_i are normalized, i.e., ‖x_i‖ = 1, and that the loss functions ℓ_i are 1/μ-smooth. A wide range of ML models fall into this setup, including ridge regression and L2-regularized logistic regression (Shalev-Shwartz and Zhang 2013b).
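For concreteness, here is a short sketch (ours; the data is a toy ridge-regression instance with ℓ_i(z) = ½(z − y_i)², a 1-smooth loss, so μ = 1) that evaluates the objective of Problem (1):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, lam = 5, 20, 0.1
X = rng.standard_normal((M, N))
X /= np.linalg.norm(X, axis=0)        # examples as columns, ||x_i|| = 1
y = rng.standard_normal(N)            # toy labels

def F(theta):
    """Primal GLM objective (1) with ridge loss l_i(z) = 0.5*(z - y_i)^2."""
    z = X.T @ theta                   # x_i^T theta for all i at once
    return np.mean(0.5 * (z - y) ** 2) + 0.5 * lam * theta @ theta

print(F(np.zeros(M)))
```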
Threat model. We consider the classical threat model used in (Abadi et al. 2016). In particular, we assume that an adversary has white-box access to the training procedure (algorithm, hyperparameters, and intermediate output) and can have access even to X ∖ x_k, where x_k is the data instance the adversary is targeting. However, the adversary cannot access the intermediate results of any update computation. We make this assumption more explicit in §3.1.

2.2 Primal-Dual Stochastic Coordinate Descent

The primal SCD algorithm repeatedly selects a coordinate j ∈ [M] at random, solves a one-dimensional auxiliary problem, and updates the parameter vector θ:

\[ \theta \leftarrow \theta + e_j\,\zeta^{\star} \quad \text{where} \quad \zeta^{\star} = \operatorname*{arg\,min}_{\zeta} F(\theta + e_j\,\zeta; X) \tag{2} \]

where e_j denotes the unit vector with value 1 at position j. Problem (2) often has a closed-form solution; otherwise F can be replaced by its second-order Taylor approximation. A crucial approach for improving the computational complexity of each SCD update is to keep an auxiliary vector v := X^⊤θ in memory.
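For the ridge instance above, the one-dimensional subproblem in (2) has a closed form, and the auxiliary vector v = X^⊤θ lets each step access the data only through the j-th feature row. A hedged sketch (ours, not the paper's pseudocode):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, lam = 5, 20, 0.1
X = rng.standard_normal((M, N))
X /= np.linalg.norm(X, axis=0)
y = rng.standard_normal(N)

theta = np.zeros(M)
v = X.T @ theta                       # auxiliary vector v = X^T theta

for _ in range(200):
    j = rng.integers(M)               # pick a coordinate at random
    row = X[j, :]                     # j-th feature across all N examples
    grad_j = row @ (v - y) / N + lam * theta[j]  # dF/dtheta_j via v
    hess_j = row @ row / N + lam                 # exact for the squared loss
    zeta = -grad_j / hess_j           # closed-form solution of (2)
    theta[j] += zeta
    v += zeta * row                   # maintain v = X^T theta incrementally
```

Note that one primal step still touches all N examples through the feature row; contrast this with the dual update below, which touches a single example.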
SCD can equivalently be applied to the dual formulation of Problem (1) (Shalev-Shwartz and Zhang 2013b):

\[ \min_{\alpha} F^{*}(\alpha; X) := \min_{\alpha} \Big\{ \frac{1}{N} \sum_{i=1}^{N} \ell_i^{*}(-\alpha_i) + \frac{\lambda}{2} \Big\| \frac{1}{\lambda N} X\alpha \Big\|^2 \Big\} \tag{3} \]

where α ∈ ℝ^N denotes the dual model vector and ℓ_i^* the convex conjugate of the loss function ℓ_i. Since the dual objective (Problem (3)) depends on the data matrix through the linear map Xα, the auxiliary vector is naturally defined as v := Xα in SDCA. We use the first-order optimality conditions to relate the primal and the dual model vectors, which results in θ(α) = (1/(λN)) Xα and leads to the important definition of the duality gap (Dünner et al. 2016):

\[ \operatorname{Gap}(\alpha) := F^{*}(\alpha; X) + F(\theta(\alpha); X) = \langle X\alpha, \theta(\alpha)\rangle + \frac{\lambda}{2}\|\theta(\alpha)\|^2 + \frac{\lambda}{2}\|\theta\|^2 \tag{4} \]

By the construction of the two problems, the optimal values of the objectives match in the convex setting and the duality gap attains zero (Shalev-Shwartz and Zhang 2013b). Therefore, the model θ can be learnt by solving either Problem (1) or Problem (3), where we use the map θ(α) to obtain the final solution when solving Problem (3).

While the two problems have a similar structure, they are quite different from a privacy perspective due to their data access patterns. When applied to the dual, SCD computes each update by processing a single example at a time, whereas primal SCD processes one coordinate across all the examples. Several implications arise, as differential privacy is defined on a per-example basis.
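The corresponding dual (SDCA) step for the same toy ridge instance, again as an illustrative sketch under our assumptions (here ℓ_i^*(−a) = ½a² − y_i a): each update processes a single example x_i and maintains v = Xα incrementally, which is exactly the per-example access pattern relevant for DP.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, lam = 5, 20, 0.1
X = rng.standard_normal((M, N))
X /= np.linalg.norm(X, axis=0)        # ||x_i|| = 1
y = rng.standard_normal(N)

alpha = np.zeros(N)                   # dual model vector
v = X @ alpha                         # auxiliary vector v = X alpha

for _ in range(200):
    i = rng.integers(N)               # one example per update
    x = X[:, i]
    # Closed-form minimizer of the one-dimensional dual subproblem for the
    # ridge conjugate l_i^*(-a) = 0.5*a^2 - y_i*a, using ||x_i|| = 1.
    delta = (y[i] - alpha[i] - x @ v / (lam * N)) / (1.0 + 1.0 / (lam * N))
    alpha[i] += delta
    v += delta * x                    # keep v = X alpha consistent

theta = v / (lam * N)                 # primal-dual map theta(alpha)
```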
2.3 Differentially Private Machine Learning

Differential privacy is a guarantee for a function f applied to a database of sensitive data (Dwork, Roth et al. 2014). In the context of supervised ML, this function is the update function of the algorithm and the data is typically a set of input-label pairs (x_i, y_i) used during model training. Two input datasets are adjacent if they differ only in a single input-label pair. Querying the model translates into making predictions for the label of some new input.

Definition 1 (Differential privacy). A randomized mechanism M : D → R satisfies (ε, δ)-DP if for any two adjacent inputs d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that:

\[ \Pr[\mathcal{M}(d) \in S] \le e^{\varepsilon}\,\Pr[\mathcal{M}(d') \in S] + \delta. \]

The Gaussian mechanism. The Gaussian mechanism is a popular method for making a deterministic function f : D → R differentially private. By adding Gaussian noise to the output of the function we can hide particularities of individual input values. The resulting mechanism is defined as M(d) := f(d) + N(0, S_f²σ²), where the variance of the noise is calibrated to the sensitivity S_f of f, i.e., the maximum change ‖f(d) − f(d′)‖ over any two adjacent inputs d, d′.
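A minimal sketch of the Gaussian mechanism (ours; the bounded-mean query and all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def gaussian_mechanism(f_value, sensitivity, sigma):
    """Release f(d) + N(0, (S_f * sigma)^2) per output coordinate (sketch)."""
    noise = sigma * sensitivity * rng.standard_normal(np.shape(f_value))
    return f_value + noise

# Example query: the mean of N values clipped to [-1, 1]. Replacing one
# value changes the mean by at most 2/N, so S_f = 2/N.
d = np.clip(rng.standard_normal(100), -1.0, 1.0)
release = gaussian_mechanism(d.mean(), sensitivity=2.0 / len(d), sigma=4.0)
print(release)
```

For ε ∈ (0, 1), choosing σ ≥ √(2 ln(1.25/δ))/ε makes this release (ε, δ)-DP (Dwork, Roth et al. 2014).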