Network Morphism

Tao Wei†  Changhu Wang  Yong Rui  Chang Wen Chen
Microsoft Research, Beijing, China, 100080. Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY, 14260.
† Tao Wei performed this work while being an intern at Microsoft Research Asia.

arXiv:1603.01670v2 [cs.LG] 8 Mar 2016

Abstract

We present in this paper a systematic study on how to morph a well-trained neural network into a new one so that its network function can be completely preserved. We define this as network morphism in this research. After morphing a parent network, the child network is expected to inherit the knowledge from its parent network and also to have the potential to continue growing into a more powerful one with much shortened training time. The first requirement for this network morphism is its ability to handle diverse morphing types of networks, including changes of depth, width, kernel size, and even subnet. To meet this requirement, we first introduce the network morphism equations, and then develop novel morphing algorithms for all these morphing types for both classic and convolutional neural networks. The second requirement for this network morphism is its ability to deal with non-linearity in a network. We propose a family of parametric-activation functions to facilitate the morphing of any continuous non-linear activation neurons. Experimental results on benchmark datasets and typical neural networks demonstrate the effectiveness of the proposed network morphism scheme.

Figure 1: Illustration of network morphism. The child network is expected to inherit the entire knowledge from the parent network with the network function preserved. A variety of morphing types are illustrated. The change of segment AC represents depth morphing: s → s + t; the inflated node r involves width and kernel size morphing; a subnet is embedded in segment CD, which is subnet morphing. Complex network morphism can also be achieved with a combination of these basic morphing operations.

1. Introduction

Deep convolutional neural networks (DCNNs) have achieved state-of-the-art results on diverse computer vision tasks such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2014), object detection (Girshick et al., 2014; Girshick, 2015; Ren et al., 2015), and semantic segmentation (Long et al., 2014). However, training such a network is very time-consuming. It usually takes weeks or even months to train an effective deep network, let alone the exploration of diverse network settings. It is therefore highly desirable for these well-trained networks to be directly adopted for other related applications with minimum retraining.

To accomplish such an ideal goal, we need to systematically study how to morph a well-trained neural network into a new one with its network function completely preserved. We call such operations network morphism. Upon completion of such a morphism, the child network shall not only inherit the entire knowledge from the parent network, but also be capable of growing into a more powerful one in much shortened training time as the process continues.
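To make the notion of function preservation concrete, the following is a minimal NumPy sketch (not taken from the paper) of depth morphing for a single linear layer: the parent's weight matrix G is factored into two child layers F1 and F2 such that F2 · F1 = G, so the deeper child reproduces the parent's outputs exactly. The pseudo-inverse factorization used here is only an illustrative choice; the deconvolution-based algorithm developed in this paper, and its treatment of non-linear activations, are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parent: a single linear layer y = G x  (5 inputs -> 3 outputs).
d_in, d_out, d_hidden = 5, 3, 8           # child hidden width >= d_in
G = rng.standard_normal((d_out, d_in))

# Child: two linear layers y = F2 (F1 x), chosen so that F2 @ F1 == G.
# F1 is a random full-column-rank matrix and F2 = G @ pinv(F1); this is
# just one way to satisfy the constraint, not the paper's algorithm.
F1 = rng.standard_normal((d_hidden, d_in))
F2 = G @ np.linalg.pinv(F1)

x = rng.standard_normal((d_in, 10))        # a batch of 10 inputs
parent_out = G @ x
child_out = F2 @ (F1 @ x)

# The morphed (deeper) child reproduces the parent's outputs exactly
# up to floating-point error, i.e., the network function is preserved.
assert np.allclose(parent_out, child_out)
```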
Network morphism is fundamentally different from existing work related to network knowledge transfer, which either tries to mimic a parent network's outputs (Bucilu et al., 2006; Ba & Caruana, 2014; Romero et al., 2014), or pre-trains to facilitate convergence and/or adapt to new datasets with a possible total change of the network function (Simonyan & Zisserman, 2014; Oquab et al., 2014).

Mathematically, a morphism is a structure-preserving map from one mathematical structure to another (Weisstein, 2002). In the context of neural networks, network morphism refers to a parameter-transferring map from a parent network to a child network that preserves its function and outputs. Although network morphism generally does not impose constraints on the architecture of the child network, we limit the investigation of network morphism to the expanding mode, which intuitively means that the child network is deeper and/or wider than its parent network. Fig. 1 illustrates the concept of network morphism, where a variety of morphing types are demonstrated, including depth morphing, width morphing, kernel size morphing, and subnet morphing. In this work, we derive network morphism equations for a successful morphing operation to follow, based on which novel network morphism algorithms can be developed for all these morphing types. The proposed algorithms work for both classic multi-layer perceptron models and convolutional neural networks. Since the proposed network morphism requires the output to be unchanged, a complex morphing can be decomposed into basic morphing steps and thus solved easily.

Depth morphing is an important morphing type, since current top-notch neural networks are going deeper and deeper (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2014; He et al., 2015a). One heuristic approach is to embed an identity mapping layer into the parent network, which is referred to as IdMorph. IdMorph is explored by a recent work (Chen et al., 2015), but is potentially problematic due to the sparsity of the identity layer, and might fail sometimes (He et al., 2015a). To overcome the issues associated with IdMorph, we introduce several practices for the morphism operation to follow, and propose a deconvolution-based algorithm for network depth morphing. This algorithm is able to asymptotically fill in all parameters with non-zero elements. Even in its worst case, the non-zero occupying rate of the proposed algorithm is still an order of magnitude higher than that of IdMorph.
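For contrast, the IdMorph baseline can be sketched as follows: a layer initialized to the identity matrix is inserted after an existing ReLU layer. The function is preserved only because ReLU is idempotent, and the inserted weights are maximally sparse (one non-zero entry per row), which is exactly the sparsity issue noted above. This is an illustrative reconstruction of the idea, not code from (Chen et al., 2015) or from this paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 5, 8, 3
W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))

def parent(x):
    return W2 @ relu(W1 @ x)

# IdMorph: insert a new layer whose weights are the identity matrix,
# followed by the same activation.  The function is preserved only
# because relu(relu(z)) == relu(z), i.e., ReLU is idempotent.
W_id = np.eye(d_hidden)

def child(x):
    h = relu(W1 @ x)
    h = relu(W_id @ h)      # the embedded identity layer
    return W2 @ h

x = rng.standard_normal((d_in, 10))
assert np.allclose(parent(x), child(x))

# The identity layer is extremely sparse: only d_hidden of its
# d_hidden * d_hidden entries are non-zero, which is the sparsity issue
# that motivates the deconvolution-based algorithm described above.
print(np.count_nonzero(W_id), "non-zeros out of", W_id.size)
```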
Another challenge the proposed network morphism will face is dealing with the non-linearity in a neural network. Even the simple IdMorph method fails in this case, because it only works for idempotent functions¹. In this work, to deal with the non-linearity, we introduce the concept of the parametric-activation function family, which is defined as an adjoint function family for an arbitrary non-linear activation function. It can reduce the non-linear operation to a linear one with a parameter that can be learned. Therefore, the network morphism of any continuous non-linear activation neurons can be solved.

¹ An idempotent function φ is defined to satisfy φ ∘ φ = φ. This condition holds for the ReLU function but fails for most other commonly used activation functions, such as Sigmoid and TanH.

To the best of our knowledge, this is the first work on network morphism, except for the recent work (Chen et al., 2015) that introduces IdMorph. We conduct extensive experiments to show the effectiveness of the proposed network morphism learning scheme on widely used benchmark datasets for both classic and convolutional neural networks. The effectiveness of the basic morphing operations is also verified. Furthermore, we show that the proposed network morphism is able to internally regularize the network, which typically leads to improved performance. Finally, we also successfully morph the well-known 16-layered VGG net (Simonyan & Zisserman, 2014) into a better performing model, with only 1/15 of the training time compared with training from scratch.

2. Related Work

We briefly introduce recent work related to network morphism and identify the differences from this work.

Mimic Learning. A series of works trying to mimic a teacher network with a student network has been developed; these usually need to be learned from scratch. For example, (Bucilu et al., 2006) tried to train a lighter network by mimicking an ensemble network. (Ba & Caruana, 2014) extended this idea, and used a shallower but wider network to mimic a deep and wide network. In (Romero et al., 2014), the authors adopted a deeper but narrower network to mimic a deep and wide network. The proposed network morphism scheme is different from these algorithms, since instead of mimicking, its goal is to make the child network directly inherit the intact knowledge (network function) from the parent network. This allows network morphism to achieve the same performance. That is why the networks are called parent and child, instead of teacher and student. Another major difference is that the child network is not learned from scratch.

Pre-training and Transfer Learning. Pre-training (Simonyan & Zisserman, 2014) is a strategy proposed to facilitate the convergence of very deep neural networks, and transfer learning² (Simonyan & Zisserman, 2014; Oquab et al., 2014) is introduced to overcome the overfitting problem when training large neural networks on relatively small datasets. Both re-initialize the last few layers of the parent network while keeping the other layers the same (or refining them in a lighter way). Their difference is that pre-training continues to train the child network on the same dataset, while transfer learning continues on a new one. However, these two strategies totally alter the parameters in the last few layers, as well as the network function.

² Although transfer learning in its own concept is very general, here it refers to a technique used for DCNNs to pre-train a model on one dataset and then adapt it to another.

Figure 2: Network morphism in the linear case. B∗ represents blobs (hidden units); G and F∗ are convolutional filters (weight matrices) for DCNNs (classic neural networks).

Net2Net. Net2Net is a recent work proposed in (Chen et al., 2015). Although it targets the same problem, there are several major differences between network morphism and Net2Net. First, the solution of Net2Net is still restricted to the IdMorph approach, while NetMorph is the first to make it possible to embed non-identity layers. Second, Net2Net's operations only work for idempotent activation functions, while NetMorph is the first to handle arbitrary non-linear activation functions.
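To illustrate footnote 1 and the parametric-activation idea numerically, the sketch below checks idempotency for ReLU and Sigmoid, and shows one plausible parameterization of an activation family that starts out linear and can be learned toward the original non-linearity. The blending formula is an assumption made for illustration; the paper's exact definition of the adjoint family may differ.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)

# Idempotency check (footnote 1): ReLU satisfies phi(phi(x)) == phi(x),
# so identity-layer (IdMorph) tricks work; Sigmoid does not.
print(np.allclose(relu(relu(x)), relu(x)))           # True
print(np.allclose(sigmoid(sigmoid(x)), sigmoid(x)))  # False

# One plausible parametric-activation family (an assumption, not the
# paper's exact definition): blend the identity with the non-linearity,
#   phi_a(x) = (1 - a) * phi(x) + a * x,  with a in [0, 1].
# At a = 1 the activation is linear, so the linear morphism equations
# hold exactly at the moment of morphing; a is then treated as a
# learnable parameter and trained toward the non-linear end.
def parametric_activation(z, a, phi=sigmoid):
    return (1.0 - a) * phi(z) + a * z

print(np.allclose(parametric_activation(x, a=1.0), x))           # linear start
print(np.allclose(parametric_activation(x, a=0.0), sigmoid(x)))  # full non-linearity
```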
