Machine Reading at the University of Washington

Hoifung Poon, Janara Christensen, Pedro Domingos, Oren Etzioni, Raphael Hoffmann, Chloe Kiddon, Thomas Lin, Xiao Ling, Mausam, Alan Ritter, Stefan Schoenmackers, Stephen Soderland, Dan Weld, Fei Wu, Congle Zhang
Department of Computer Science & Engineering, University of Washington, Seattle, WA 98195
{hoifung,janara,pedrod,etzioni,raphaelh,chloe,tlin,xiaoling,mausam,aritter,stef,soderland,weld,wufei,clzhang}@cs.washington.edu

Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pages 87–95, Los Angeles, California, June 2010. © 2010 Association for Computational Linguistics

Abstract

Machine reading is a long-standing goal of AI and NLP. In recent years, tremendous progress has been made in developing machine learning approaches for many of its subtasks, such as parsing, information extraction, and question answering. However, existing end-to-end solutions typically require a substantial amount of human effort (e.g., labeled data and/or manual engineering) and are not well poised for Web-scale knowledge acquisition. In this paper, we propose a unifying approach for machine reading: bootstrap from the most easily extractable knowledge and conquer the long tail via a self-supervised learning process. This self-supervision is powered by joint inference based on Markov logic, and is made scalable by leveraging hierarchical structures and coarse-to-fine inference. Researchers at the University of Washington have taken the first steps in this direction. Our existing work explores the wide spectrum of this vision and shows its promise.

1 Introduction

Machine reading, or learning by reading, aims to extract knowledge automatically from unstructured text and apply the extracted knowledge to end tasks such as decision making and question answering. It has been a major goal of AI and NLP since their early days. With the advent of the Web, the billions of online text documents contain a virtually unlimited amount of knowledge to extract, further increasing the importance and urgency of machine reading.

In the past, there has been a lot of progress in automating many subtasks of machine reading with machine learning approaches (e.g., components in the traditional NLP pipeline such as POS tagging and syntactic parsing). However, end-to-end solutions are still rare, and existing systems typically require a substantial amount of human effort in manual engineering and/or labeling examples. As a result, they often target restricted domains and only extract limited types of knowledge (e.g., a pre-specified relation). Moreover, many machine reading systems train their knowledge extractors once and do not leverage further learning opportunities such as additional text and interaction with end users.

Ideally, a machine reading system should strive to satisfy the following desiderata:

End-to-end: the system should input raw text, extract knowledge, and be able to answer questions and support other end tasks;
High quality: the system should extract knowledge with high accuracy;
Large-scale: the system should acquire knowledge at Web-scale and be open to arbitrary domains, genres, and languages;
Maximally autonomous: the system should incur minimal human effort;
Continuous learning from experience: the system should constantly integrate new information sources (e.g., new text documents) and learn from user questions and feedback (e.g., via performing end tasks) to continuously improve its performance.

These desiderata raise many intriguing and challenging research questions. Machine reading research at the University of Washington has explored a wide spectrum of solutions to these challenges and has produced a large number of initial systems that have demonstrated promising performance. During this expedition, an underlying unifying vision has started to emerge. It has become apparent that the key to solving machine reading is to:

1. Conquer the long tail of textual knowledge via a self-supervised learning process that leverages data redundancy to bootstrap from the head and propagates information down the long tail by joint inference;
2. Scale this process to billions of Web documents by identifying and leveraging ubiquitous structures that lead to sparsity.

Figure 1: A unifying vision for machine reading: bootstrap from the head regime of the power-law distribution of textual knowledge, and conquer the long tail in a self-supervised learning process that raises certainty on sparse extractions by propagating information via joint inference from frequent extractions.

In Section 2, we present this vision in detail, identify the major dimensions these initial systems have explored, and propose a unifying approach that satisfies all five desiderata. In Section 3, we review machine reading research at the University of Washington and show how it forms a synergistic effort towards solving the machine reading problem. We conclude in Section 4.

2 A Unifying Approach for Machine Reading

The core challenges to machine reading stem from the massive scale of the Web and the long-tailed distribution of textual knowledge. The heterogeneous Web contains texts that vary substantially in subject matter (e.g., finance vs. biology) and writing style (e.g., blog posts vs. scientific papers). In addition, natural languages are famous for their myriad variations in expressing the same meaning. A fact may be stated in a straightforward way, such as "kale contains calcium". More often, though, it may be stated in a syntactically and/or lexically different way than as phrased in an end task (e.g., "calcium is found in kale"). Finally, many facts are not even stated explicitly, and must be inferred from other facts (e.g., "kale prevents osteoporosis" may not be stated explicitly but can be inferred by combining facts such as "kale contains calcium" and "calcium helps prevent osteoporosis"). As a result, machine reading must not rely on explicit supervision such as manual rules and labeled examples, which would incur prohibitive cost at Web scale. Instead, it must be able to learn from indirect supervision.

A key source of indirect supervision is meta knowledge about the domains. For example, the TextRunner system (Banko et al., 2007) hinges on the observation that there exist general, relation-independent patterns for information extraction. Another key source of indirect supervision is data redundancy. While a rare extracted fact or inference pattern may arise by chance or error, this is much less likely for those with many repetitions (Downey et al., 2010). Such highly redundant knowledge can be extracted easily and with high confidence, and can be leveraged for bootstrapping. For knowledge that resides in the long tail, explicit forms of redundancy (e.g., identical expressions) are rare, but this can be circumvented by joint inference. For example, expressions that are composed with or by similar expressions probably have the same meaning; the fact that kale prevents osteoporosis can be derived by combining the facts that kale contains calcium and that calcium helps prevent osteoporosis via a transitivity-through inference pattern. In general, joint inference can take various forms, ranging from simple voting, to shrinkage in a probabilistic ontology, to sophisticated probabilistic reasoning based on a joint model. Simple methods tend to scale better, but their capability for propagating information is limited. More sophisticated methods can uncover
implicit redundancy and propagate much more information with higher quality, yet the challenge is how to make them scale as well as the simple ones.

To do machine reading, a self-supervised learning process, informed by meta knowledge, stipulates what form of joint inference to use and how. Effectively, it increases certainty on sparse extractions by propagating information from more frequent ones. Figure 1 illustrates this unifying vision.

In the past, machine reading research at the University of Washington has explored a variety of solutions that span the key dimensions of this unifying vision: knowledge representation, bootstrapping, self-supervised learning, large-scale joint inference, ontology induction, and continuous learning. See Section 3 for more details. Based on this experience, one direction seems particularly promising, which we propose here as our unifying approach for end-to-end machine reading:

Markov logic is used as the unifying framework for knowledge representation and joint inference;
Self-supervised learning is governed by a joint probabilistic model that incorporates a small amount of heuristic knowledge and large-scale relational structures to maximize the amount and quality of information to propagate;
Joint inference is made scalable to the Web by coarse-to-fine inference.

[…] learning, with labeled examples supplanted by indirect supervision.

Another distinctive feature is that we propose to use coarse-to-fine inference (Felzenszwalb and McAllester, 2007; Petrov, 2009) as a unifying framework to scale inference to the Web. Essentially, coarse-to-fine inference leverages the sparsity imposed by hierarchical structures that are ubiquitous in human knowledge (e.g., taxonomies/ontologies). At coarse levels (the top levels in a hierarchy), ambiguities are rare (there are few objects and relations), and inference can be conducted very efficiently. The result is then used to prune unpromising refinements at the next level. This process continues down the hierarchy until a decision can be made. In this way, inference can potentially be sped up exponentially, analogous to binary tree search.

Finally, we propose a novel form of continuous learning by leveraging the interaction between the system and end users to constantly improve performance. This is straightforward to do in our approach given the self-supervision process and the availability of powerful joint inference. Essentially, when the system output is applied to an end task (e.g., answering questions), the feedback from users is collected and incorporated back into the system as a bootstrap source.
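The transitivity-through pattern from Section 2 can be sketched as a tiny forward-chaining loop over extracted triples. This is only an illustration of the idea, not code from any UW system: the facts, relation names, confidence scores, and the rule-reliability discount below are all hypothetical.

```python
# Minimal sketch of transitivity-through joint inference over extracted triples.
# All facts, relations, and confidence values are hypothetical illustrations.

# (subject, relation, object) -> confidence from redundancy-based extraction
facts = {
    ("kale", "contains", "calcium"): 0.9,
    ("calcium", "helps-prevent", "osteoporosis"): 0.8,
}

# Transitivity-through rule: contains(x, y) & helps-prevent(y, z) => helps-prevent(x, z).
# Derived confidence is discounted by the rule's own (assumed) reliability.
RULE_RELIABILITY = 0.7

def forward_chain(facts):
    """Apply the rule until no derived fact improves on its current confidence."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y), c1 in list(derived.items()):
            for (y2, r2, z), c2 in list(derived.items()):
                if r1 == "contains" and r2 == "helps-prevent" and y == y2:
                    conf = c1 * c2 * RULE_RELIABILITY
                    key = (x, "helps-prevent", z)
                    if conf > derived.get(key, 0.0):
                        derived[key] = conf
                        changed = True
    return derived

kb = forward_chain(facts)
print(kb[("kale", "helps-prevent", "osteoporosis")])  # 0.9 * 0.8 * 0.7 ≈ 0.504
```

Note how the derived fact inherits a lower confidence than its premises: this mirrors the paper's point that information propagated down the long tail by joint inference is weaker than the high-redundancy extractions in the head, which is why bootstrapping starts from the head.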

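The coarse-to-fine pruning described above can likewise be sketched in a few lines. The two-level taxonomy, the scoring functions, and the pruning threshold here are hypothetical stand-ins, assumed purely for illustration; real systems would use learned models at each level.

```python
# Sketch of coarse-to-fine inference over a hypothetical taxonomy:
# score cheap coarse-level classes first, prune unpromising branches, and
# only run the expensive fine-grained scoring inside surviving branches.

TAXONOMY = {
    "person": ["politician", "athlete"],
    "location": ["city", "country"],
}

def coarse_score(mention, coarse_class):
    # Stand-in for a cheap model over the few top-level classes (hypothetical scores).
    scores = {("Seattle", "location"): 0.9, ("Seattle", "person"): 0.1}
    return scores.get((mention, coarse_class), 0.0)

def fine_score(mention, fine_class):
    # Stand-in for an expensive fine-grained model (hypothetical scores).
    scores = {("Seattle", "city"): 0.8, ("Seattle", "country"): 0.2}
    return scores.get((mention, fine_class), 0.0)

def coarse_to_fine(mention, prune_below=0.5):
    """Return the best fine-grained class without scoring pruned branches."""
    best_class, best = None, 0.0
    for coarse, fine_classes in TAXONOMY.items():
        if coarse_score(mention, coarse) < prune_below:
            continue  # prune: skip every fine class under this branch
        for fine in fine_classes:
            s = fine_score(mention, fine)
            if s > best:
                best_class, best = fine, s
    return best_class

print(coarse_to_fine("Seattle"))  # city
```

With a deep, balanced hierarchy, each level discards most branches before the next level is scored, which is the source of the potentially exponential speedup the paper likens to binary tree search.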