Relational Knowledge Distillation

Wonpyo Park*   Dongju Kim   Yan Lu   Minsu Cho
POSTECH, Kakao Corp.   POSTECH   Microsoft Research   POSTECH
http://cvlab.postech.ac.kr/research/RKD/

*The work was done when Wonpyo Park was an intern at MSR.

Abstract

Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic the output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. As concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models by a significant margin. In particular, for metric learning it allows students to outperform their teachers, achieving state-of-the-art results on standard benchmark datasets.

Figure 1: Relational Knowledge Distillation. While conventional KD transfers individual outputs from a teacher model (f_T) to a student model (f_S) point-wise, our approach transfers relations of the outputs structure-wise. It can be viewed as a generalization of conventional KD.

1. Introduction

Recent advances in computer vision and artificial intelligence have largely been driven by deep neural networks with many layers, and thus current state-of-the-art models typically require a high cost of computation and memory at inference. One promising direction for mitigating this computational burden is to transfer the knowledge in a cumbersome model (a teacher) into a small model (a student). To this end, two main questions arise: (1) what constitutes the knowledge in a learned model, and (2) how to transfer that knowledge into another model. Knowledge distillation (or transfer) (KD) methods [3, 4, 11] regard the knowledge as a learned mapping from inputs to outputs, and transfer it by training the student model with the teacher's outputs (of the last or a hidden layer) as targets. Recently, KD has turned out to be very effective not only in training a student model [1, 11, 12, 27, 47] but also in improving a teacher model itself by self-distillation [2, 9, 45].

In this work, we revisit KD from the perspective of linguistic structuralism [19], which focuses on structural relations in a semiological system. Saussure's concept of the relational identity of signs is at the heart of structuralist theory: "In a language, as in every other semiological system, what distinguishes a sign is what constitutes it" [30]. In this perspective, the meaning of a sign depends on its relations with other signs within the system; a sign has no absolute meaning independent of the context.

The central tenet of our work is that what constitutes the knowledge is better presented by relations of the learned representations than by individuals of those: an individual data example, e.g., an image, obtains a meaning in relation to, or in contrast with, other data examples in a system of representation, and thus the primary information lies in the structure of the data embedding space. On this basis, we introduce a novel approach to KD, dubbed Relational Knowledge Distillation (RKD), that transfers structural relations of outputs rather than individual outputs themselves (Figure 1).
For its concrete realizations, we propose two RKD losses: a distance-wise (second-order) and an angle-wise (third-order) distillation loss. RKD can be viewed as a generalization of conventional KD, and it can also be combined with other methods to boost performance thanks to its complementarity with conventional KD. In experiments on metric learning, image classification, and few-shot learning, our approach significantly improves the performance of student models. Extensive experiments on the three different tasks show that knowledge indeed lives in the relations, and that RKD is effective in transferring it.

2. Related Work

There has been a long line of research and development on transferring knowledge from one model to another. Breiman and Shang [3] first proposed to learn single-tree models that approximate the performance of multiple-tree models and provide better interpretability. Similar approaches for neural networks emerged in the work of Bucilua et al. [4], Ba and Caruana [1], and Hinton et al. [11], mainly for the purpose of model compression. Bucilua et al. compress an ensemble of neural networks into a single neural network. Ba and Caruana [1] increase the accuracy of a shallow neural network by training it to mimic a deep neural network while penalizing the difference of logits between the two networks. Hinton et al. [11] revive this idea under the name of KD, training a student model with the objective of matching the softmax distribution of a teacher model. Recently, many subsequent papers have proposed different approaches to KD. Romero et al. [27] distill a teacher using additional linear projection layers to train relatively narrower students. Instead of imitating output activations of the teacher, Zagoruyko and Komodakis [47] and Huang and Wang [12] transfer an attention map of a teacher network into a student, and Tarvainen and Valpola [36] introduce a similar approach using mean weights. Sau et al. [29] propose a noise-based regularizer for KD, while Lopes et al. [17] introduce data-free KD that utilizes metadata of a teacher model. Xu et al. [43] propose a conditional adversarial network to learn a loss function for KD. Crowley et al. [8] compress a model by grouping convolution channels of the model and training it with attention transfer. Polino et al. [25] and Mishra and Marr [20] combine KD with network quantization, which aims to reduce the bit precision of weights and activations.

A few recent papers [2, 9, 45] have shown that distilling a teacher model into a student model of identical architecture, i.e., self-distillation, can improve the student over the teacher. Furlanello et al. [9] and Bagherinezhad et al. [2] demonstrate this by training the student using softmax outputs of the teacher as ground truth over generations. Yim et al. [45] transfer output activations using Gramian matrices and then fine-tune the student. We also demonstrate that RKD strongly benefits from self-distillation.

KD has also been investigated beyond supervised learning. Lopez-Paz et al. [18] unify two frameworks [11, 38] and extend them to unsupervised, semi-supervised, and multi-task learning scenarios. Radosavovic et al. [26] generate multiple predictions from an example by applying multiple data transformations to it, then use an ensemble of the predictions as annotations for omni-supervised learning.

With growing interest in KD, task-specific KD methods have been proposed for object detection [5, 6, 37], face model compression [24], and image retrieval and Re-ID [7]. Notably, the work of Chen et al. [7] proposes a KD technique for metric learning that transfers similarities between images using a rank loss. In the sense that it transfers relational information of ranks, it bears some similarity with ours. Their work, however, is limited to metric learning, whereas we introduce a general framework for RKD and demonstrate its applicability to various tasks. Furthermore, our experiments on metric learning show that the proposed method outperforms [7] by a significant margin.

3. Our Approach

In this section we first revisit conventional KD and introduce a general form of RKD. Then, two simple yet effective distillation losses are proposed as instances of RKD.
Notation. Given a teacher model T and a student model S, we let f_T and f_S be functions of the teacher and the student, respectively. Typically the models are deep neural networks, and in principle the function f can be defined using the output of any layer of the network (e.g., a hidden or softmax layer). We denote by X^N a set of N-tuples of distinct data examples, e.g., X^2 = {(x_i, x_j) | i ≠ j} and X^3 = {(x_i, x_j, x_k) | i ≠ j ≠ k}.

3.1. Conventional knowledge distillation

In general, conventional KD methods [1, 2, 8, 11, 12, 25, 27, 45, 47] can commonly be expressed as minimizing the objective function

    \mathcal{L}_{\mathrm{IKD}} = \sum_{x_i \in \mathcal{X}} l\big(f_T(x_i),\, f_S(x_i)\big),    (1)

where l is a loss function that penalizes the difference between the teacher and the student.

For example, the popular work of Hinton et al. [11] uses pre-softmax outputs for f_T and f_S, and puts softmax (with temperature τ) and Kullback-Leibler divergence for l:

    \sum_{x_i \in \mathcal{X}} \mathrm{KL}\left( \mathrm{softmax}\left(\frac{f_T(x_i)}{\tau}\right),\, \mathrm{softmax}\left(\frac{f_S(x_i)}{\tau}\right) \right).    (2)

The work of Romero et al. [27] propagates knowledge of hidden activations by setting f_T and f_S to be outputs of hidden layers, and l to be the squared Euclidean distance.

Figure 2: Individual knowledge distillation (IKD) vs. relational knowledge distillation (RKD). While conventional KD (IKD) transfers individual outputs of the teacher directly to the student, RKD extracts relational information using a relational potential function ψ(·) and transfers that information from the teacher to the student.
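To make the individual KD objective of Eqs. (1)–(2) concrete, the following is a minimal PyTorch-style sketch of Hinton et al.'s temperature-scaled softmax matching. The function name, the default temperature value, and the use of PyTorch are illustrative assumptions for this sketch, not details prescribed by the paper.

```python
import torch
import torch.nn.functional as F


def individual_kd_loss(teacher_logits: torch.Tensor,
                       student_logits: torch.Tensor,
                       tau: float = 4.0) -> torch.Tensor:
    """Illustrative sketch of Eq. (2): KL divergence between the
    temperature-softened softmax outputs of teacher and student."""
    # Teacher probabilities serve as the target distribution (f_T in Eq. (2)).
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    # Student log-probabilities, since F.kl_div expects log-space inputs.
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    # F.kl_div(input, target) computes KL(target || input); 'batchmean'
    # averages over the batch, standing in for the sum over x_i in Eq. (2).
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean')
```

Here teacher_logits and student_logits stand for the pre-softmax outputs f_T(x_i) and f_S(x_i) over a mini-batch; the temperature 4.0 is an arbitrary placeholder.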

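For contrast with the individual formulation above, the sketch below illustrates the structure-wise transfer depicted in Figure 2, using pairwise Euclidean distances between embeddings as one simple choice of relational potential ψ. Both this choice of potential and the Huber (smooth L1) penalty are illustrative assumptions; the paper's actual distance-wise and angle-wise losses are introduced as instances of RKD in the sections that follow.

```python
import torch
import torch.nn.functional as F


def pairwise_distance_potential(embeddings: torch.Tensor) -> torch.Tensor:
    """One simple choice of relational potential psi: Euclidean distances
    over all pairs (x_i, x_j) with i != j drawn from a mini-batch."""
    # torch.pdist returns the upper-triangular pairwise distances.
    return torch.pdist(embeddings, p=2)


def relational_kd_loss(teacher_emb: torch.Tensor,
                       student_emb: torch.Tensor) -> torch.Tensor:
    """Sketch of the structure-wise transfer in Figure 2: compare the
    teacher's relational information psi(t_1, ..., t_n) with the
    student's psi(s_1, ..., s_n) instead of matching outputs one by one."""
    t_rel = pairwise_distance_potential(teacher_emb)
    s_rel = pairwise_distance_potential(student_emb)
    # Penalize structural differences between the two sets of relations;
    # the Huber (smooth L1) penalty here is an illustrative choice.
    return F.smooth_l1_loss(s_rel, t_rel)
```

In a training loop, teacher_emb and student_emb would be f_T and f_S evaluated on the same mini-batch, with the teacher's forward pass taken under torch.no_grad() so that only the student is updated.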