Knowledge distillation
Layer-Level Knowledge Distillation for Deep Neural Network Learning
Similarity Transfer for Knowledge Distillation (Haoran Zhao, Kun Gong, Xin Sun, Junyu Dong, Hui Yu)
Understanding and Improving Knowledge Distillation
Optimising Hardware Accelerated Neural Networks with Quantisation and a Knowledge Distillation Evolutionary Algorithm
Memory-Replay Knowledge Distillation
Knowledge Distillation: A Survey
Towards Effective Utilization of Pre-Trained Language Models
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model
MKD: a Multi-Task Knowledge Distillation Approach for Pretrained Language Models
Knowledge Distillation from Internal Representations
Sequence-Level Knowledge Distillation
Improved Knowledge Distillation Via Teacher Assistant
Reinforced Multi-Teacher Selection for Knowledge Distillation
Knowledge Distillation in Deep Learning and Its Applications
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
Progressive Blockwise Knowledge Distillation for Neural Network Acceleration
Teacher-Student Knowledge Distillation from BERT
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: a Review and New Outlooks
Relational Knowledge Distillation
Improved Knowledge Distillation via Teacher Assistant (arXiv:1902.03393v2 [cs.LG], 17 Dec 2019)
Follow Your Path: a Progressive Method for Knowledge Distillation
Improving Knowledge Distillation Via Category Structure
Knowledge Extraction with No Observable Data
Structured Knowledge Distillation for Dense Prediction
Knowledge Distillation by Sparse Representation Matching
Improve Knowledge Distillation with Better Supervision (Tiancheng Wen, Shenqi Lai, Xueming Qian)
Annealing Knowledge Distillation
LightPAFF: A Two-Stage Distillation Framework for Pre-Training and Fine-Tuning
Online Knowledge Distillation Via Collaborative Learning
Understanding Knowledge Distillation
arXiv:2105.08919v1 [cs.LG], 19 May 2021
Born-Again Neural Networks
Lifelong Language Knowledge Distillation
Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
A Simple Ensemble Learning Knowledge Distillation
Data-Free Knowledge Distillation for Image Super-Resolution