Deep Transfer Learning with Joint Adaptation Networks

Mingsheng Long¹, Han Zhu¹, Jianmin Wang¹, Michael I. Jordan²

¹ Key Lab for Information System Security, MOE; Tsinghua National Lab for Information Science and Technology (TNList); NEL-BDS; School of Software, Tsinghua University, Beijing 100084, China. ² University of California, Berkeley, Berkeley 94720. Correspondence to: Mingsheng Long <[email protected]>.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). arXiv:1605.06636v2 [cs.LG], 17 Aug 2017.

Abstract

Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. An adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear time. Experiments testify that our model yields state-of-the-art results on standard datasets.

1. Introduction

Deep networks have significantly improved the state of the art for diverse machine learning problems and applications. Unfortunately, the impressive performance gains come only when massive amounts of labeled data are available for supervised learning. Since manual labeling of sufficient training data for diverse application domains on-the-fly is often prohibitive, for a target task short of labeled data there is strong motivation to build effective learners that can leverage rich labeled data from a different source domain. However, this learning paradigm suffers from the shift in data distributions across domains, which poses a major obstacle in adapting predictive models for the target task (Quionero-Candela et al., 2009; Pan & Yang, 2010).

Learning a discriminative model in the presence of the shift between training and test distributions is known as transfer learning or domain adaptation (Pan & Yang, 2010). Previous shallow transfer learning methods bridge the source and target domains by learning invariant feature representations or estimating instance importance without using target labels (Huang et al., 2006; Pan et al., 2011; Gong et al., 2013). Recent deep transfer learning methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning, which can simultaneously disentangle the explanatory factors of variations behind data and match the marginal distributions across domains (Tzeng et al., 2014; 2015; Long et al., 2015; 2016; Ganin & Lempitsky, 2015; Bousmalis et al., 2016).

Transfer learning becomes more challenging when domains may change by the joint distributions of input features and output labels, which is a common scenario in practical applications. First, deep networks generally learn the complex function from input features to output labels via multilayer feature transformation and abstraction. Second, deep features in standard CNNs eventually transition from general to specific along the network, and the transferability of features and classifiers decreases when the cross-domain discrepancy increases (Yosinski et al., 2014). Consequently, after feed-forwarding the source and target domain data through deep networks for multilayer feature abstraction, the shifts in the joint distributions of input features and output labels still linger in the network activations of multiple domain-specific higher layers. Thus we can use the joint distributions of the activations in these domain-specific layers to approximately reason about the original joint distributions, which should be matched across domains to enable domain adaptation. To date, this problem has not been addressed in deep networks.

In this paper, we present Joint Adaptation Networks (JAN) to align the joint distributions of multiple domain-specific layers across domains for unsupervised domain adaptation. JAN largely extends the ability of deep adaptation networks (Long et al., 2015) to reason about the joint distributions as mentioned above, while keeping the training procedure even simpler. Specifically, JAN admits a simple transfer pipeline, which processes the source and target domain data by convolutional neural networks (CNN) and then aligns the joint distributions of activations in multiple task-specific layers. To learn parameters and enable alignment, we derive the joint maximum mean discrepancy (JMMD), which measures the Hilbert-Schmidt norm between kernel mean embeddings of the empirical joint distributions of source and target data. Thanks to a linear-time unbiased estimate of JMMD, we can easily draw a mini-batch of samples to estimate the JMMD criterion and implement it efficiently via back-propagation. We further maximize JMMD using an adversarial training strategy such that the distributions of source and target domains are made more distinguishable. Empirical study shows that our models yield state-of-the-art results on standard datasets.
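To make the alignment idea concrete, the sketch below (an illustration of the criterion, not the authors' released implementation) computes a quadratic-time biased empirical JMMD between mini-batches of source and target activations drawn from several adaptation layers, e.g. the last feature layer and the classifier probabilities. It assumes Gaussian kernels with hand-picked bandwidths `sigmas`; the paper itself relies on a linear-time unbiased estimate, which this sketch does not reproduce.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two batches of row vectors, shape (n, m)."""
    dist2 = torch.cdist(x, y, p=2) ** 2            # pairwise squared Euclidean distances
    return torch.exp(-dist2 / (2 * sigma ** 2))

def jmmd(source_acts, target_acts, sigmas):
    """Quadratic-time biased estimate of the joint MMD over several layers.

    source_acts / target_acts: lists of per-layer activation batches, e.g.
    [fc_features, class_probabilities], each of shape (batch_size, dim_l).
    The joint kernel is the element-wise product of per-layer kernel matrices,
    which corresponds to a tensor-product feature map across the layers.
    """
    k_ss = k_tt = k_st = 1.0
    for z_s, z_t, sigma in zip(source_acts, target_acts, sigmas):
        k_ss = k_ss * gaussian_kernel(z_s, z_s, sigma)   # source vs. source
        k_tt = k_tt * gaussian_kernel(z_t, z_t, sigma)   # target vs. target
        k_st = k_st * gaussian_kernel(z_s, z_t, sigma)   # source vs. target
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Hypothetical use in a training step: add the JMMD penalty, weighted by a
# trade-off coefficient, to the cross-entropy loss on labeled source data.
# loss = ce_loss + lambda_ * jmmd([f_s, p_s], [f_t, p_t], sigmas=(1.0, 0.3))
```

In this sketch `sigmas`, `lambda_` and the choice of adaptation layers are assumptions for illustration; in practice the kernel bandwidths are typically set from the median pairwise distance or a multi-kernel combination.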
2. Related Work

Transfer learning (Pan & Yang, 2010) aims to build learning machines that generalize across different domains following different probability distributions (Sugiyama et al., 2008; Pan et al., 2011; Duan et al., 2012; Gong et al., 2013; Zhang et al., 2013). Transfer learning finds wide applications in computer vision (Saenko et al., 2010; Gopalan et al., 2011; Gong et al., 2012; Hoffman et al., 2014) and natural language processing (Collobert et al., 2011; Glorot et al., 2011). The main technical problem of transfer learning is how to reduce the shifts in data distributions across domains. Most existing methods learn a shallow representation model by which domain discrepancy is minimized, which cannot suppress domain-specific explanatory factors of variations. Deep networks learn abstract representations that disentangle the explanatory factors of variations behind data (Bengio et al., 2013) and extract transferable factors underlying different populations (Glorot et al., 2011; Oquab et al., 2013), but they can only reduce, not remove, the cross-domain discrepancy (Yosinski et al., 2014). Recent work on deep domain adaptation embeds domain-adaptation modules into deep networks to boost transfer performance (Tzeng et al., 2014; 2015; 2017; Ganin & Lempitsky, 2015; Long et al., 2015; 2016). These methods mainly correct the shifts in marginal distributions, assuming that the conditional distributions remain unchanged after the marginal distribution adaptation.

Transfer learning becomes more challenging when domains may change by the joint distributions $P(X, Y)$ of input features $X$ and output labels $Y$. The distribution shifts may stem from the marginal distributions $P(X)$ (a.k.a. covariate shift (Huang et al., 2006; Sugiyama et al., 2008)), the conditional distributions $P(Y|X)$ (a.k.a. conditional shift (Zhang et al., 2013)), or both (a.k.a. dataset shift (Quionero-Candela et al., 2009)). Another line of work (Zhang et al., 2013; Wang & Schneider, 2014) corrects both target and conditional shifts based on the theory of kernel embedding of conditional distributions (Song et al., 2009; 2010; Sriperumbudur et al., 2010). Since the target labels are unavailable, adaptation is performed by minimizing the discrepancy between marginal distributions instead of conditional distributions, under additional assumptions that make the problem tractable (Zhang et al., 2013). As it is not easy to justify which components of the joint distribution are changing in practice, our work is transparent to diverse scenarios by directly manipulating the joint distribution without assumptions on the marginal and conditional distributions. Furthermore, it remains unclear how to account for the shift in joint distributions within the regime of deep architectures.
3. Preliminary

3.1. Hilbert Space Embedding

We begin by providing an overview of Hilbert space embeddings of distributions, where each distribution is represented by an element in a reproducing kernel Hilbert space (RKHS). Denote by $X$ a random variable with domain $\Omega$ and distribution $P(X)$, and by $\mathbf{x}$ the instantiations of $X$. A reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ on $\Omega$ endowed with a kernel $k(\mathbf{x}, \mathbf{x}')$ is a Hilbert space of functions $f: \Omega \to \mathbb{R}$ with inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$. Its element $k(\mathbf{x}, \cdot)$ satisfies the reproducing property: $\langle f(\cdot), k(\mathbf{x}, \cdot) \rangle_{\mathcal{H}} = f(\mathbf{x})$. Alternatively, $k(\mathbf{x}, \cdot)$ can be viewed as an (infinite-dimensional) implicit feature map $\phi(\mathbf{x})$ where $k(\mathbf{x}, \mathbf{x}') = \langle \phi(\mathbf{x}), \phi(\mathbf{x}') \rangle_{\mathcal{H}}$. Kernel functions can be defined on vector spaces, graphs, time series and structured objects to handle diverse applications.

The kernel embedding represents a probability distribution $P$ by an element in the RKHS endowed with kernel $k$ (Smola et al., 2007; Sriperumbudur et al., 2010; Gretton et al., 2012):

$$\mu_X(P) \triangleq \mathbb{E}_X[\phi(X)] = \int_{\Omega} \phi(\mathbf{x}) \, dP(\mathbf{x}), \qquad (1)$$

where the distribution is mapped to the expected feature map, i.e. to a point in the RKHS, given that $\mathbb{E}_X[k(\mathbf{x}, \mathbf{x}')] \leqslant \infty$. The mean embedding $\mu_X$ has the property that the expectation of any RKHS function $f$ can be evaluated as an inner product in $\mathcal{H}$: $\langle \mu_X, f \rangle_{\mathcal{H}} \triangleq \mathbb{E}_X[f(X)]$, $\forall f \in \mathcal{H}$. This kind of kernel mean embedding provides us a nonparametric perspective on manipulating distributions by drawing samples from them. We will require a characteristic kernel $k$ such that the kernel embedding $\mu_X(P)$ is injective, and that the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, which removes the necessity of density estimation of $P$. This technique has been widely applied in many tasks, including feature extraction, density estimation and two-sample tests (Smola et al., 2007; Gretton et al., 2012).

While the true distribution $P(X)$ is rarely accessible, we can estimate its embedding using a finite sample (Gretton et al., 2012). Given a sample $\mathcal{D}_X = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ of size $n$, the empirical estimate of the kernel embedding is $\hat{\mu}_X = \frac{1}{n} \sum_{i=1}^{n} \phi(\mathbf{x}_i)$.
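As a minimal illustration of how these empirical embeddings are used for the two-sample test mentioned above (my sketch, not code from the paper), the squared RKHS distance between two empirical mean embeddings, $\|\hat{\mu}_X - \hat{\mu}_Y\|_{\mathcal{H}}^2$, expands entirely into kernel evaluations within and across the two samples, so the implicit feature map $\phi$ never has to be computed. The Gaussian kernel and its bandwidth below are assumptions chosen for the example.

```python
import torch

def rbf(x, y, sigma=1.0):
    """Gaussian kernel matrix k(x_i, y_j) between two samples of row vectors."""
    return torch.exp(-torch.cdist(x, y, p=2) ** 2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared MMD, i.e. ||mu_hat_X - mu_hat_Y||_H^2.

    Expanding the squared norm of the difference of empirical mean embeddings
    yields averages of within-sample and cross-sample kernel evaluations.
    """
    return rbf(x, x, sigma).mean() + rbf(y, y, sigma).mean() - 2.0 * rbf(x, y, sigma).mean()

# Example: samples from shifted Gaussians give a clearly positive value,
# while two samples from the same distribution give a value near zero.
x = torch.randn(256, 10)
y = torch.randn(256, 10) + 1.0
print(mmd2(x, y).item(), mmd2(x, torch.randn(256, 10)).item())
```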
