Localized Structured Prediction

Carlo Ciliberto¹, Francis Bach²,³, Alessandro Rudi²,³
¹ Department of Electrical and Electronic Engineering, Imperial College London, London
² Département d'informatique, École normale supérieure, PSL Research University
³ INRIA, Paris, France

Supervised Learning 101

• X input space, Y output space,
• ℓ : Y × Y → R loss function,
• ρ probability distribution on X × Y.

f* = argmin_{f : X → Y} E[ℓ(f(x), y)],

given only the dataset (x_i, y_i)_{i=1}^n sampled independently from ρ.

Structured Prediction

Prototypical approach: empirical risk minimization. Solve the problem

f̂ = argmin_{f ∈ G} (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i) + λ R(f),

where G ⊆ {f : X → Y} (usually a convex function space).

If Y is a vector space:
• G is easy to choose/optimize: (generalized) linear models, kernel methods, neural networks, etc.

If Y is a "structured" space:
• How to choose G? How to optimize over it?

State of the Art: the Structured Case

Y arbitrary: how do we parametrize G and learn f̂?

Surrogate approaches
+ Clear theory (e.g. convergence and learning rates)
− Only for special cases (classification, ranking, multi-labeling, etc.)
[Bartlett et al., 2006, Duchi et al., 2010, Mroueh et al., 2012]

Score learning techniques
+ General algorithmic framework (e.g. StructSVM [Tsochantaridis et al., 2005])
− Limited theory (no consistency; see e.g. [Bakir et al., 2007])

Is it possible to have the best of both worlds?

general algorithmic framework + clear theory

Table of Contents

1. A General Framework for Structured Prediction [Ciliberto et al., 2016]
2. Leveraging Local Structure [This Work]

A General Framework for Structured Prediction

Characterizing the target function:

f* = argmin_{f : X → Y} E_{xy}[ℓ(f(x), y)].

Pointwise characterization in terms of the conditional expectation:

f*(x) = argmin_{z ∈ Y} E_y[ℓ(z, y) | x].

Deriving an Estimator

Idea: approximate

f*(x) = argmin_{z ∈ Y} E(z, x),   E(z, x) = E_y[ℓ(z, y) | x],

by means of an estimator Ê(z, x) of the ideal E(z, x):

f̂(x) = argmin_{z ∈ Y} Ê(z, x),   Ê(z, x) ≈ E(z, x).

Question: how to choose Ê(z, x) given the dataset (x_i, y_i)_{i=1}^n?

Estimating the Conditional Expectation

Idea: for every z, perform "regression" on the values ℓ(z, ·):

ĝ_z = argmin_{g : X → R} (1/n) Σ_{i=1}^n L(g(x_i), ℓ(z, y_i)) + λ R(g).

Then we take Ê(z, x) = ĝ_z(x).

Questions:
• Models: how to choose L?
• Computations: do we need to compute ĝ_z for every z ∈ Y?
• Theory: does Ê(z, x) → E(z, x)? More generally, does f̂ → f*?

Square Loss!

Let L be the square loss. Then:

ĝ_z = argmin_g (1/n) Σ_{i=1}^n (g(x_i) − ℓ(z, y_i))² + λ‖g‖².

In particular, for linear models g(x) = φ(x)ᵀw:

ĝ_z(x) = φ(x)ᵀŵ_z,   ŵ_z = argmin_w ‖Aw − b‖² + λ‖w‖²,

with A = [φ(x_1), …, φ(x_n)]ᵀ and b = [ℓ(z, y_1), …, ℓ(z, y_n)]ᵀ.

Computing All the ĝ_z at Once

Closed-form solution:

ĝ_z(x) = φ(x)ᵀŵ_z = φ(x)ᵀ(AᵀA + λnI)⁻¹Aᵀb = α(x)ᵀb.

In particular, we can compute

α_i(x) = φ(x)ᵀ(AᵀA + λnI)⁻¹φ(x_i)

only once (independently of z).
Then, for any z,

ĝ_z(x) = Σ_{i=1}^n α_i(x) b_i = Σ_{i=1}^n α_i(x) ℓ(z, y_i).

Structured Prediction Algorithm

Input: dataset (x_i, y_i)_{i=1}^n.

Training: for i = 1, …, n, compute

v_i = (AᵀA + λnI)⁻¹ φ(x_i).

Prediction: given a new test point x, compute

α_i(x) = φ(x)ᵀ v_i.

Then,

f̂(x) = argmin_{z ∈ Y} Σ_{i=1}^n α_i(x) ℓ(z, y_i).

The Proposed Structured Prediction Algorithm

Questions:
• Models: how to choose L? Square loss!
• Computations: do we need to compute ĝ_z for every z ∈ Y? No need: compute them all at once!
• Theory: does f̂ → f*? Yes!

Theorem (Rates, [Ciliberto et al., 2016]). Under mild assumptions on ℓ, let λ = n^(−1/2). Then

E[ℓ(f̂(x), y) − ℓ(f*(x), y)] ≤ O(n^(−1/4)),   w.h.p.

A General Framework for Structured Prediction (General Algorithm + Theory)

Is it possible to have the best of both worlds? Yes!

We introduced an algorithmic framework for structured prediction:
• directly applicable to a wide family of problems (Y, ℓ);
• with strong theoretical guarantees;
• recovering many existing algorithms (not seen here).

What Am I Hiding?

• Theory. The key assumption to achieve consistency and rates is that ℓ is a Structure Encoding Loss Function (SELF):

ℓ(z, y) = ⟨ψ(z), φ(y)⟩_H   for all z, y ∈ Y,

with ψ, φ : Y → H continuous maps into a Hilbert space H.
• Similar to the characterization of reproducing kernels.
• In principle hard to verify; however, lots of ML losses satisfy it!
• Computations. We need to solve an optimization problem at prediction time!

Prediction: The Inference Problem

Solving an optimization problem at prediction time is standard practice in structured prediction.
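The complete training and prediction procedure above can be sketched in a few lines of NumPy. This is a toy setup: the feature map, the dataset, and the finite candidate grid standing in for Y are all assumptions made for illustration, not part of the original slides.

```python
import numpy as np

# Toy sketch of the structured prediction estimator: explicit feature
# map phi, squared loss standing in for a structured loss ell(z, y),
# and a finite candidate set Z to make the argmin tractable.

def phi(x):
    # polynomial features of a scalar input (an illustrative choice)
    return np.array([1.0, x, x ** 2])

def loss(z, y):
    return (z - y) ** 2

xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # training inputs x_i
ys = xs ** 2                               # training outputs y_i = x_i^2
n = len(xs)
lam = 1e-3

# Training: v_i = (A^T A + lambda*n*I)^{-1} phi(x_i), all i at once
A = np.stack([phi(x) for x in xs])                      # n x d
V = np.linalg.inv(A.T @ A + lam * n * np.eye(A.shape[1])) @ A.T  # d x n

def predict(x, candidates):
    alpha = phi(x) @ V                     # alpha_i(x) = phi(x)^T v_i
    scores = [sum(alpha[i] * loss(z, ys[i]) for i in range(n))
              for z in candidates]
    return candidates[int(np.argmin(scores))]

zs = np.linspace(0.0, 4.0, 81)             # finite stand-in for Y
print(predict(1.2, zs))                    # a value near 1.2^2 = 1.44
```

Note that training touches the loss not at all; the loss values ℓ(z, y_i) enter only at prediction time, weighted by the precomputed α_i(x).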
This is known as the inference problem:

f̂(x) = argmin_{z ∈ Y} Ê(x, z).

In our case it is reminiscent of a weighted barycenter:

f̂(x) = argmin_{z ∈ Y} Σ_{i=1}^n α_i(x) ℓ(z, y_i).

It is *very* problem dependent.

Example: Learning to Rank

Goal: given a query x, order a set of documents d_1, …, d_k according to their relevance scores y_1, …, y_k w.r.t. x.

Pairwise loss:

ℓ_rank(f(x), y) = Σ_{i,j=1}^k (y_i − y_j) sign(f(x)_i − f(x)_j).

It can be shown that f̂(x) = argmin_{z ∈ Y} Σ_{i=1}^n α_i(x) ℓ_rank(z, y_i) is a Minimum Feedback Arc Set problem on DAGs (NP-hard!).

Still, approximate solutions can improve upon non-consistent approaches.

Additional Work

Case studies:
• learning to rank [Korba et al., 2018]
• output Fisher embeddings [Djerrab et al., 2018]
• Y = manifolds, ℓ = geodesic distance [Rudi et al., 2018]
• Y = probability space, ℓ = Wasserstein distance [Luise et al., 2018]

Refinements of the analysis:
• alternative derivations [Osokin et al., 2017]
• discrete losses [Nowak-Vila et al., 2018, Struminsky et al., 2018]

Extensions:
• application to multitask learning [Ciliberto et al., 2017]
• beyond the least-squares surrogate [Nowak-Vila et al., 2019]
• regularizing with the trace norm [Luise et al., 2019]

Predicting Probability Distributions [Luise, Rudi, Pontil, Ciliberto '18]

Setting: Y = P(R^d), probability distributions on R^d.

Loss: Wasserstein distance

ℓ(μ, ν) = min_{τ ∈ Π(μ,ν)} ∫ ‖z − y‖² dτ(z, y).

Application: digit reconstruction.

Manifold Regression [Rudi, Ciliberto, Marconi, Rosasco '18]

Setting: Y a Riemannian manifold.
Loss: (squared) geodesic distance.
Optimization: Riemannian gradient descent.
Applications: fingerprint reconstruction (Y = S¹ sphere), multi-labeling (Y a statistical manifold).

Nonlinear Multi-task Learning [Ciliberto, Rudi, Rosasco, Pontil '17; Luise, Stamos, Pontil, Ciliberto '19]

Idea: instead of solving multiple learning problems (tasks) separately, leverage the potential relations among them.

Previous methods: only imposing/learning linear task relations; unable to cope with non-linear constraints (e.g.
ranking, robotics, etc.).

MTL + Structured Prediction:
− interpret multiple tasks as separate outputs;
− impose constraints as structure on the joint output.

Leveraging Local Structure

Motivating Example (Between-Locality)

Super-resolution: learn f : Lowres → Highres. However...
• Very large output sets (high sample complexity).
• Local information might be sufficient to predict the output.

Idea: learn local input-output maps under structural constraints (i.e. overlapping output patches should line up).

Between-Locality. Let [x]_p, [y]_p denote input/output "parts", p ∈ P:
• P([y]_p | x) = P([y]_p | [x]_p)
• P([y]_p | [x]_p) = P([y]_q | [x]_q)

Structured Prediction + Parts

Assumption. The loss is "aware" of the parts:

ℓ(y', y) = Σ_{p ∈ P} ℓ₀([y']_p, [y]_p),

where
• the set P indexes the parts of X and Y,
• ℓ₀ is the loss on parts,
• [y]_p is the p-th part of y.

Localized Structured Prediction: Inference

[Figure: diagram of localized inference mapping an input x to an output z; image not recoverable from the source.]
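A part-decomposed loss of this form can be sketched directly. In this toy example (all choices illustrative, not from the slides) the output is a small image, the parts are the overlapping 2×2 patches, and ℓ₀ is the squared Frobenius distance between corresponding patches:

```python
import numpy as np

# Toy sketch of a part-decomposed loss: the output y is an image and
# the parts p are its overlapping 2x2 patches (illustrative choices).

def patches(img, size=2):
    h, w = img.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            yield img[i:i + size, j:j + size]

def part_loss(y_pred, y_true):
    # ell(y', y) = sum_p ell0([y']_p, [y]_p), with ell0 the squared
    # Frobenius distance between corresponding patches
    return sum(float(np.sum((p - q) ** 2))
               for p, q in zip(patches(y_pred), patches(y_true)))

a = np.zeros((3, 3))
b = np.zeros((3, 3)); b[0, 0] = 1.0   # corner pixel: lies in 1 patch
c = np.zeros((3, 3)); c[1, 1] = 1.0   # center pixel: lies in all 4 patches
print(part_loss(a, b), part_loss(a, c))   # prints 1.0 4.0
```

Because the patches overlap, an error on a shared pixel is counted once per part containing it; minimizing such a loss therefore pushes predictions on overlapping patches to line up.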
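The pairwise ranking loss ℓ_rank from the learning-to-rank example above can also be evaluated directly. A minimal sketch on toy data (names illustrative; the sign convention follows the slide, and exact inference over score vectors is NP-hard in general, so real instances need approximate solvers):

```python
import numpy as np

# Pairwise ranking loss from the learning-to-rank example (toy data).

def rank_loss(scores, y):
    # ell_rank(f(x), y) = sum_{i,j} (y_i - y_j) * sign(scores_i - scores_j)
    k = len(y)
    return sum((y[i] - y[j]) * np.sign(scores[i] - scores[j])
               for i in range(k) for j in range(k))

y = np.array([3.0, 1.0, 2.0])   # relevance scores of 3 documents
s = np.array([0.9, 0.1, 0.5])   # a candidate score vector

# Negating all scores reverses the ordering, flipping every pairwise term.
print(rank_loss(s, y), rank_loss(-s, y))   # prints 8.0 -8.0
```

Each unordered pair contributes twice with the same sign, which is why the loss for three documents takes values in even steps.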