The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Improved Knowledge Distillation via Teacher Assistant

Seyed Iman Mirzadeh,∗1 Mehrdad Farajtabar,∗2 Ang Li,2 Nir Levine,2 Akihiro Matsukawa,†3 Hassan Ghasemzadeh1
1Washington State University, WA, USA
2DeepMind, CA, USA
3D.E. Shaw, NY, USA
1{seyediman.mirzadeh, hassan.ghasemzadeh}@wsu.edu
2{farajtabar, anglili, nirlevine}@google.com
[email protected]

Abstract

Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher; in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation.

Figure 1: TA fills the gap between student & teacher

by adding a term to the usual classification loss that encourages the student to mimic the teacher's behavior.

However, we argue that knowledge distillation is not always effective, especially when the gap (in size) between
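The distillation objective described above — a classification loss plus a term that encourages the student to mimic the teacher's softened outputs — can be sketched as follows. This is a minimal NumPy illustration of standard knowledge distillation, not the authors' implementation; the temperature `T` and mixing weight `lam` are hypothetical values for illustration.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.5):
    """Hard-label cross-entropy plus a KL term that pulls the student's
    softened distribution toward the teacher's (hypothetical T, lam)."""
    n = len(labels)
    # Usual classification loss against the ground-truth labels
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 so its gradient magnitude stays comparable to the CE term
    pt = softmax(teacher_logits, T)
    ps = softmax(student_logits, T)
    kl = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=-1).mean()
    return (1 - lam) * ce + lam * (T ** 2) * kl
```

With `lam=0` the loss reduces to ordinary cross-entropy training; when the student's logits already match the teacher's, the KL term vanishes and only the hard-label term remains.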