Exploiting Shared Representations for Personalized Federated Learning

Liam Collins 1, Hamed Hassani 2, Aryan Mokhtari 1, Sanjay Shakkottai 1

1 University of Texas at Austin, 2 University of Pennsylvania. Correspondence to: Liam Collins <[email protected]>.

Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s).

Abstract

Deep neural networks have shown the ability to extract universal feature representations from data such as images and text that have been useful for a variety of learning tasks. However, the fruits of representation learning have yet to be fully realized in federated settings. Although data in federated settings is often non-i.i.d. across clients, the success of centralized deep learning suggests that data often shares a global feature representation, while the statistical heterogeneity across clients or tasks is concentrated in the labels. Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client. Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation. We prove that this method obtains linear convergence to the ground-truth representation with near-optimal sample complexity in a linear setting, demonstrating that it can efficiently reduce the problem dimension for each client. Further, we provide extensive experimental results demonstrating the improvement of our method over alternative personalized federated learning approaches in heterogeneous settings.

1. Introduction

Many of the most heralded successes of modern machine learning have come in centralized settings, wherein a single model is trained on a large amount of centrally-stored data. The growing number of data-gathering devices, however, calls for a distributed architecture to train models. Federated learning aims at addressing this issue by providing a platform in which a group of clients collaborate to learn effective models for each client by leveraging the local computational power, memory, and data of all clients (McMahan et al., 2017). The task of coordinating between the clients is fulfilled by a central server that combines the models received from the clients at each round and broadcasts the updated information to them. Importantly, the server and clients are restricted to methods that satisfy communication and privacy constraints, preventing them from directly applying centralized techniques.

However, one of the most important challenges in federated learning is the issue of data heterogeneity, where the underlying data distributions of client tasks could be substantially different from each other. In such settings, if the server and clients learn a single shared model (e.g., by minimizing average loss), the resulting model could perform poorly for many of the clients in the network (and also not generalize well across diverse data (Jiang et al., 2019)). In fact, for some clients, it might be better to simply use their own local data (even if it is small) to train a local model; see Figure 1. Finally, the (federated) trained model may not generalize well to unseen clients that have not participated in the training process. These issues raise this question:

"How can we exploit the data and computational power of all clients in data heterogeneous settings to learn a personalized model for each client?"

We address this question by taking advantage of the common representation among clients. Specifically, we view the data heterogeneous federated learning problem as n parallel learning tasks that possibly share some common structure, and our goal is to learn and exploit this common representation to improve the quality of each client's model. Indeed, this would be in line with our understanding from centralized learning, where we have witnessed success in training multiple tasks simultaneously by leveraging a common (low-dimensional) representation in popular machine learning tasks (e.g., image classification, next-word prediction) (Bengio et al., 2013; LeCun et al., 2015).

Figure 1. Local-only training suffers in small-training-data regimes, whereas training a single global model (FedAvg) cannot overcome client heterogeneity even when the number of training samples is large. FedRep exploits a common representation of the clients to achieve small error in all cases. (Plot: average local MSE per client versus number of training samples per user, for d = 20, k = 2, n = 100, comparing Local Only, FedAvg, and FedRep.)

Main Contributions. We introduce a novel federated learning framework and an associated algorithm for data heterogeneous settings. Next, we present our main contributions.

(i) FedRep Algorithm. Federated Representation Learning (FedRep) leverages the full quantity of data stored across clients to learn a global low-dimensional representation using gradient-based updates. Further, it enables each client to compute a personalized, low-dimensional classifier, which we term as the client's local head, that accounts for the unique labeling of each client's local data. (A simplified sketch of one FedRep round appears after this list.)

(ii) Convergence Rate. We show that FedRep converges to the optimal representation at an exponentially fast rate with near-optimal sample complexity in the case that each client aims to solve a linear regression problem with a two-layer linear neural network. Our analysis further implies that we only need O(κ²k³(log(n) + κ²kd/n)) samples per client, where n is the number of clients, d is the dimension of the data, k is the representation dimension, and κ is the condition number of the ground-truth client-representation matrix.

(iii) Empirical Results. Through a combination of synthetic and real datasets (CIFAR10, CIFAR100, FEMNIST, Sent140) we show the benefits of FedRep in: (a) leveraging many local updates, (b) robustness to different levels of heterogeneity, and (c) generalization to new clients. We consider several important baselines including FedAvg (McMahan et al., 2017), Fed-MTL (Smith et al., 2017), LG-FedAvg (Liang et al., 2020), and Per-FedAvg (Fallah et al., 2020). Our experiments indicate that FedRep outperforms these baselines in heterogeneous settings that share a global representation.
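To make the alternating structure in contribution (i) concrete, below is a minimal NumPy sketch of one FedRep communication round in the linear-regression setting of contribution (ii): the shared representation is a matrix B of size d × k and each client's head is a vector w_i of length k. The function name, learning rates, step counts, and the use of plain gradient steps for both blocks are illustrative choices of ours rather than the authors' reference implementation (which may, for example, solve the head subproblem exactly); the sketch only illustrates that each client performs many cheap k-dimensional head updates for every single update of the d × k representation, which the server then averages.

import numpy as np

def fedrep_round(B, clients, head_steps=10, lr_head=0.01, lr_rep=0.01):
    # One simplified FedRep communication round for linear regression.
    # B       : (d, k) shared representation broadcast by the server
    # clients : list of (X, y, w) for the participating clients, where
    #           X is (m, d) local data, y is (m,) labels, w is (k,) the local head
    rep_updates, new_heads = [], []
    for X, y, w in clients:
        m = X.shape[0]
        Z = X @ B                                   # (m, k) low-dimensional features
        # Many local updates of the k-dimensional head, with the representation fixed.
        for _ in range(head_steps):
            w = w - lr_head * (Z.T @ (Z @ w - y)) / m
        # A single gradient update of the representation, with the head fixed.
        residual = Z @ w - y                        # (m,) prediction errors
        B_i = B - lr_rep * (X.T @ np.outer(residual, w)) / m
        rep_updates.append(B_i)
        new_heads.append(w)
    # The server averages the representations returned by the participating clients.
    return np.mean(rep_updates, axis=0), new_heads

Because the inner loop touches only k parameters per client, many such head updates per round are affordable; this division of labor is what the benefits listed below rely on.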
Benefits of FedRep. Next, we list benefits of FedRep over standard federated learning (that learns a single model).

(I) More local updates. By reducing the problem dimension, each client can make many local updates at each communication round, which is beneficial in learning its own individual head. This is unlike standard federated learning, where multiple local updates in a heterogeneous setting move each client away from the best averaged representation, and thus hurt performance.

(II) Gains of cooperation. Denote d to be the data dimension and n the number of clients. From our sample complexity bounds, it follows that with FedRep, the sample complexity per client scales as Θ(log(n) + d/n). On the other hand, local learning (without any collaboration) has a sample complexity that scales as Θ(d). Thus, if 1 ≪ n ≲ e^Θ(d) (see Section 4.2 for details), we expect benefits of collaboration through federation. When d is large (as is typical in practice), e^Θ(d) is exponentially larger, and federation helps each client. To the best of our knowledge, this is the first sample-complexity-based result for heterogeneous federated learning that demonstrates the benefit of cooperation.

(III) Generalization to new clients. For a new client, since a ready-made representation is available, the client only needs to learn a head with a low-dimensional representation of dimension k. Thus, its sample complexity scales only as Θ(k log(1/ε)) to have no more than ε error in accuracy.
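To make the comparison in (II) tangible, here is an illustrative calculation with hypothetical values d = 1000 and n = 100, suppressing constants and the κ and k factors that appear in the full bound of contribution (ii):

\underbrace{\Theta\!\left(\log(n) + \tfrac{d}{n}\right)}_{\text{FedRep, per client}} \approx \log(100) + \frac{1000}{100} \approx 15,
\qquad
\underbrace{\Theta(d)}_{\text{local-only, per client}} \approx 1000 .

Under these stylized numbers each client needs roughly two orders of magnitude fewer samples with federation, and n = 100 sits comfortably inside the regime 1 ≪ n ≲ e^Θ(d) in which cooperation is expected to help.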
Related Work. A variety of recent works have studied personalization in federated learning using, for example, local fine-tuning (Wang et al., 2019; Yu et al., 2020), meta-learning (Chen et al., 2018; Khodak et al., 2019; Jiang et al., 2019; Fallah et al., 2020), additive mixtures of local and global models (Hanzely & Richtárik, 2020; Deng et al., 2020; Mansour et al., 2020), and multi-task learning (Smith et al., 2017). In all of these methods, each client's subproblem is still full-dimensional: there is no notion of learning a dimensionality-reduced set of local parameters. More recently, Liang et al. (2020) also proposed a representation learning method for federated learning, but their method attempts to learn many local representations and a single global head, as opposed to a single global representation and many local heads. Earlier, Arivazhagan et al. (2019) presented an algorithm to learn local heads and a global network body, but their local procedure jointly updates the head and body (using the same number of updates), and they did not provide any theoretical justification for their proposed method. Meanwhile, another line of work has studied federated learning in heterogeneous settings (Karimireddy et al., 2020; Wang et al., 2020; Pathak & Wainwright, 2020; Haddadpour et al., 2020; Reddi et al., 2020; Reisizadeh et al., 2020; Mitra et al., 2021), and the optimization-based insights from these works may be used to supplement our formulation and algorithm.

2. Problem Formulation

The generic form of federated learning with n clients is

\min_{(q_1, \dots, q_n) \in \mathcal{Q}} \; \frac{1}{n} \sum_{i=1}^{n} f_i(q_i),   (1)

where f_i and q_i are the error function and learning model for the i-th client, respectively, and Q is the space of feasible sets of n models. We consider a supervised setting in which the data for the i-th client is generated by a distribution (x_i, y_i) ∼ D_i.

(Figure 2 schematic, partially recovered: in each round, the server averages the representations returned by the participating clients i ∈ I^t, φ^{t+1} = (1/(rn)) Σ_{i ∈ I^t} φ_i^t, where rn is the number of participating clients.)
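For orientation, the shared-representation structure that FedRep exploits can be written as a special case of (1) by restricting each client model q_i to the composition of a global representation φ with a client-specific head h_i; the notation below (Φ, H, h_i ∘ φ) is ours and anticipates the formulation developed in the remainder of this section, which is not reproduced here:

\min_{\phi \in \Phi, \; h_1, \dots, h_n \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} f_i(h_i \circ \phi),

where φ : R^d → R^k is shared by all n clients and each h_i : R^k → Y is local, with k ≪ d, so that the only parameters differing across clients are the low-dimensional heads.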
