An Introduction to Conditional Random Fields

Charles Sutton
University of Edinburgh, Edinburgh, EH8 9AB, UK
[email protected]

Andrew McCallum
University of Massachusetts, Amherst, MA 01003, USA
[email protected]

Boston – Delft

Foundations and Trends® in Machine Learning
Published, sold and distributed by now Publishers Inc., PO Box 1024, Hanover, MA 02339, USA. Tel. +1-781-985-4510. www.nowpublishers.com, [email protected]
Outside North America: now Publishers Inc., PO Box 179, 2600 AD Delft, The Netherlands. Tel. +31-6-51115274

The preferred citation for this publication is C. Sutton and A. McCallum, An Introduction to Conditional Random Fields, Foundations and Trends® in Machine Learning, vol. 4, no. 4, pp. 267–373, 2011.

ISBN: 978-1-60198-572-9
© 2012 C. Sutton and A. McCallum

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise, without prior written permission of the publishers.
Foundations and Trends® in Machine Learning, Vol. 4, No. 4 (2011) 267–373
© 2012 C. Sutton and A. McCallum
DOI: 10.1561/2200000013

An Introduction to Conditional Random Fields

Charles Sutton (School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB, UK, [email protected]) and Andrew McCallum (Department of Computer Science, University of Massachusetts, Amherst, MA 01003, USA, [email protected])

Abstract

Many tasks involve predicting a large number of variables that depend on each other as well as on other observed variables. Structured prediction methods are essentially a combination of classification and graphical modeling.
They combine the ability of graphical models to compactly model multivariate data with the ability of classification methods to perform prediction using large sets of input features. This survey describes conditional random fields (CRFs), a popular probabilistic method for structured prediction. CRFs have seen wide application in many areas, including natural language processing, computer vision, and bioinformatics. We describe methods for inference and parameter estimation for CRFs, including practical issues for implementing large-scale CRFs. We do not assume previous knowledge of graphical modeling, so this survey is intended to be useful to practitioners in a wide variety of fields.

Contents

1 Introduction
  1.1 Implementation Details
2 Modeling
  2.1 Graphical Modeling
  2.2 Generative versus Discriminative Models
  2.3 Linear-chain CRFs
  2.4 General CRFs
  2.5 Feature Engineering
  2.6 Examples
  2.7 Applications of CRFs
  2.8 Notes on Terminology
3 Overview of Algorithms
4 Inference
  4.1 Linear-Chain CRFs
  4.2 Inference in Graphical Models
  4.3 Implementation Concerns
5 Parameter Estimation
  5.1 Maximum Likelihood
  5.2 Stochastic Gradient Methods
  5.3 Parallelism
  5.4 Approximate Training
  5.5 Implementation Concerns
6 Related Work and Future Directions
  6.1 Related Work
  6.2 Frontier Areas
Acknowledgments
References

1 Introduction

Fundamental to many applications is the ability to predict multiple variables that depend on each other. Such applications are as diverse as classifying regions of an image [49, 61, 69], estimating the score in a game of Go [130], segmenting genes in a strand of DNA [7], and syntactic parsing of natural-language text [144].
In such applications, we wish to predict an output vector y = {y0, y1, ..., yT} of random variables given an observed feature vector x. A relatively simple example from natural-language processing is part-of-speech tagging, in which each variable ys is the part-of-speech tag of the word at position s, and the input x is divided into feature vectors {x0, x1, ..., xT}. Each xs contains various information about the word at position s, such as its identity, orthographic features such as prefixes and suffixes, membership in domain-specific lexicons, and information in semantic databases such as WordNet.

One approach to this multivariate prediction problem, especially if our goal is to maximize the number of labels ys that are correctly classified, is to learn an independent per-position classifier that maps x ↦ ys for each s. The difficulty, however, is that the output variables have complex dependencies. For example, in English adjectives do not usually follow nouns, and in computer vision, neighboring regions in an image tend to have similar labels. Another difficulty is that the output variables may represent a complex structure such as a parse tree, in which a choice of what grammar rule to use near the top of the tree can have a large effect on the rest of the tree.

A natural way to represent the manner in which output variables depend on each other is provided by graphical models. Graphical models, which include such diverse model families as Bayesian networks, neural networks, factor graphs, Markov random fields, Ising models, and others, represent a complex distribution over many variables as a product of local factors on smaller subsets of variables.
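The per-position baseline described above can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (the feature names, toy weights, and data are invented for this sketch, not taken from the survey): each word yields a sparse feature vector xs, a linear model scores every candidate tag, and each position is decided by an independent argmax that ignores the neighboring tags.

```python
# Hypothetical sketch of the independent per-position classifier x -> ys.
# Feature names, toy weights, and the example sentence are illustrative only.

def features(words, s):
    """Sparse feature dict for position s: identity, prefix/suffix, shape."""
    w = words[s]
    return {
        "word=" + w.lower(): 1.0,
        "suffix3=" + w[-3:]: 1.0,
        "prefix1=" + w[:1]: 1.0,
        "is_capitalized": float(w[0].isupper()),
    }

def predict_position(weights, words, s, tags):
    """Score each candidate tag with a linear model and take the argmax.
    The decision at position s ignores the tags at s-1 and s+1, which is
    exactly the limitation that structured prediction addresses."""
    feats = features(words, s)
    scores = {
        tag: sum(weights.get((tag, f), 0.0) * v for f, v in feats.items())
        for tag in tags
    }
    return max(scores, key=scores.get)

# Toy hand-set weights standing in for a trained model.
weights = {
    ("DET", "word=the"): 2.0,
    ("NOUN", "suffix3=dog"): 1.5,
    ("VERB", "suffix3=rks"): 1.5,
}
words = ["The", "dog", "barks"]
tags = ["DET", "NOUN", "VERB"]
pred = [predict_position(weights, words, s, tags) for s in range(len(words))]
print(pred)  # → ['DET', 'NOUN', 'VERB']
```

A linear-chain CRF, introduced later in the survey, replaces this per-position argmax with a joint score over the entire tag sequence by adding features on adjacent tag pairs (ys-1, ys), so that the choice at one position can influence its neighbors.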
