Natural Language Processing for Online Applications: Text Retrieval, Extraction and Categorization

Total Pages: 16

File Type: PDF, Size: 1020 KB

Natural Language Processing for Online Applications: Text Retrieval, Extraction and Categorization

Natural Language Processing (series)

Editor: Prof. Ruslan Mitkov, School of Humanities, Languages and Social Sciences, University of Wolverhampton, Stafford St., Wolverhampton WV1 1SB, United Kingdom. Email: [email protected]

Advisory Board: Christian Boitet (University of Grenoble), John Carroll (University of Sussex, Brighton), Eugene Charniak (Brown University, Providence), Eduard Hovy (Information Sciences Institute, USC), Richard Kittredge (University of Montreal), Geoffrey Leech (Lancaster University), Carlos Martin-Vide (Rovira i Virgili Un., Tarragona), Andrei Mikheev (University of Edinburgh), John Nerbonne (University of Groningen), Nicolas Nicolov (IBM, T.J. Watson Research Center), Kemal Oflazer (Sabanci University), Allan Ramsay (UMIST, Manchester), Monique Rolbert (Université de Marseille), Richard Sproat (AT&T Labs Research, Florham Park), K-Y Su (Behaviour Design Corp.), Isabelle Trancoso (INESC, Lisbon), Benjamin Tsou (City University of Hong Kong), Jun-ichi Tsujii (University of Tokyo), Evelyne Tzoukermann (Bell Laboratories, Murray Hill), Yorick Wilks (University of Sheffield)

Volume 5: Natural Language Processing for Online Applications: Text Retrieval, Extraction and Categorization, by Peter Jackson and Isabelle Moulinier (Thomson Legal & Regulatory)

John Benjamins Publishing Company, Amsterdam / Philadelphia

The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.

Library of Congress Cataloging-in-Publication Data
Jackson, Peter, 1948-
Natural language processing for online applications : text retrieval, extraction, and categorization / Peter Jackson, Isabelle Moulinier.
p. cm. (Natural Language Processing, ISSN 1567-8202; v. 5)
Includes bibliographical references and index.
I. Jackson, Peter. II. Moulinier, Isabelle. III. Title. IV. Series.
QA76.9.N38 I33 2002  006.3'5--dc21  2002066539
ISBN 90 272 4988 1 (Eur.) / 1 58811 249 7 (US) (Hb; alk. paper)
ISBN 90 272 4989 X (Eur.) / 1 58811 250 0 (US) (Pb; alk. paper)

© 2002 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.
John Benjamins Publishing Co. · P.O. Box 36224 · 1020 ME Amsterdam · The Netherlands
John Benjamins North America · P.O. Box 27519 · Philadelphia PA 19118-0519 · USA

Table of contents

Preface

Chapter 1. Natural language processing
  What is NLP?
  NLP and linguistics
    Syntax and semantics
    Pragmatics and context
    Two views of NLP
    Tasks and supertasks
  Linguistic tools
    Sentence delimiters and tokenizers
    Stemmers and taggers
    Noun phrase and name recognizers
    Parsers and grammars
  Plan of the book

Chapter 2. Document retrieval
  Information retrieval
  Indexing technology
  Query processing
    Boolean search
    Ranked retrieval
    Probabilistic retrieval
    Language modeling
  Evaluating search engines
    Evaluation studies
    Evaluation metrics
    Relevance judgments
    Total system evaluation
  Attempts to enhance search performance
    Query expansion and thesauri
    Query expansion from relevance information*
  The future of Web searching
    Indexing the Web
    Searching the Web
    Ranking and reranking documents
    The state of online search
  Summary of information retrieval

Chapter 3. Information extraction
  The Message Understanding Conferences
  Regular expressions
  Finite automata in FASTUS
    Finite State Machines and regular languages
    Finite State Machines as parsers
  Pushdown automata and context-free grammars
    Analyzing case reports
    Context free grammars
    Parsing with a pushdown automaton
    Coping with incompleteness and ambiguity
  Limitations of current technology and future research
    Explicit versus implicit statements
    Machine learning for information extraction
    Statistical language models for information extraction
  Summary of information extraction

Chapter 4. Text categorization
  Overview of categorization tasks and methods
  Handcrafted rule based methods
  Inductive learning for text classification
    Naïve Bayes classifiers
    Linear classifiers*
    Decision trees and decision lists
  Nearest Neighbor algorithms
  Combining classifiers
    Data fusion
    Boosting
    Using multiple classifiers
  Evaluation of text categorization systems
    Evaluation studies
    Evaluation metrics
    Relevance judgments
    System evaluation

Chapter 5. Towards text mining
  What is text mining?
  Reference and coreference
    Named entity recognition
    The coreference task
  Automatic summarization
    Summarization tasks
    Constructing summaries from document fragments
    Multi-document summarization (MDS)
  Testing of automatic summarization programs
    Evaluation problems in summarization research
    Building a corpus for training and testing
  Prospects for text mining and NLP

Index

Preface

There is no single text on the market that covers the emerging technologies of document retrieval, information extraction, and text categorization in a coherent fashion. This book seeks to satisfy a genuine need on the part of technology practitioners in the Internet space, who are faced with having to make difficult decisions as to what research has been done, and what the best practices are. It is not intended as a vendor guide (such things are quickly out of date), or as a recipe for building applications (such recipes are very context-dependent). But it does identify the key technologies, the issues involved, and the strengths and weaknesses of the various approaches. There is also a strong emphasis on evaluation in every chapter, both in terms of methodology (how to evaluate) and what controlled experimentation and industrial experience have to tell us.

I was prompted to write this book after spending seven years running an R&D group in an Internet publishing and solutions business. During that time, we were able to put into production a number of systems that either generated revenue or enabled cost savings for the company, leveraging technologies from information retrieval, information extraction, and text categorization. This is not a chronicle of these exploits, but a primer for those who are already interested in natural language processing for online applications. Nevertheless, my treatment of the philosophy and practice of language processing is colored by the context in which I function, namely the arena of commercial exploitation. Thus, although there is a focus on technical detail and research results, I also address some of the issues that arise in applying such systems to data collections of realistic size and complexity.
The book is not intended exclusively as an academic text, although I suspect that it will be of interest to students who wish to use these technologies in an industrial setting. It is also aimed at software engineers, project managers, and technology executives who want or need to understand the technology at some level. I hope that such people find it useful, and that it provokes ideas, discussion, and action in the field of applied research and development. Each chapter begins with lighter material and then progresses to heavier stuff, with some of the later sections and sidebars being marked with an asterisk as optional. Every attempt has been made to cover more ground, and make the material more accessible, than one would expect from the typical textbook.

Thanks are due to Khalid Al-Kofahi, Chris Dozier, David Lewis, Jane Lund, Ruslan Mitkov, and two anonymous referees, for providing insightful comments on one or more chapters. I would also like to thank James Allen and David Israel for helping me get started in information extraction back in 1995, and Chris Wolff for backing that project at Thomson Legal Publishing. I thank my colleagues at Thomson Legal & Regulatory for bearing with me while I finished the book, and for providing a stimulating work environment. Finally, I thank my wife Sandy for putting up with me while I devoted long hours to this project.

Peter Jackson
St Paul, MN

Chapter 1. Natural language processing

A conversation with Eliza

Eliza: how are you today?
PJ: not so good
Eliza: why do you say not so good?
PJ: i suffer from allergies
Eliza: that's interesting
PJ: i come out in spots
Eliza: i'm not sure i know what you're talking about
PJ: spots on my face
Eliza: your face
PJ: that's right
Eliza: is it really that right?
PJ: you're a program, aren't you?
Eliza: does it please you to believe i am a program, am not i?
PJ: it sure does
Eliza: i see
PJ: yup
Eliza: a while back you talked about your face
PJ: what about your face?
Eliza: what is it you really want to know?
PJ: are you my kind of girl?
Eliza: perhaps in your fantasies i am your kind of girl
PJ: in my fantasies you are blonde
Eliza: what makes you think i am blonde?
PJ: i don't know
Eliza: don't you really know?
PJ: no
Eliza: are you saying no just to be negative?
PJ: no
Eliza: you are being a bit negative
PJ: no i'm not
Eliza: how long have you been not?

The above dialogue took place recently between one of the authors and a computer program called Eliza,1 downloaded from the World Wide Web. Eliza was an AI brainchild of the 1960s that employed rather primitive pattern-matching methods to respond to inputs and sustain a conversation.
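The pattern-matching style Eliza relies on is easy to reproduce. The sketch below is a deliberately minimal illustration rather than Weizenbaum's original script: it matches the user's input against a few regular-expression rules and reflects pronouns back into the response. The rule table and reflection map are invented for the example.

```python
import re
import random

# Pronoun reflections applied to the captured fragment ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "i", "your": "my"}

# A tiny, invented rule table: (pattern, list of response templates).
RULES = [
    (r"i suffer from (.*)", ["why do you suffer from {0}?", "how long have you suffered from {0}?"]),
    (r"i (?:feel|am) (.*)", ["why do you say you are {0}?", "do you often feel {0}?"]),
    (r"you are (.*)", ["does it please you to believe i am {0}?"]),
    (r"(.*)", ["please tell me more", "i'm not sure i know what you're talking about"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a response from the first rule whose pattern matches, with the fragment reflected."""
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            fragment = reflect(match.group(1))
            return random.choice(templates).format(fragment)
    return "i see"

if __name__ == "__main__":
    print(respond("I suffer from allergies"))  # e.g. "why do you suffer from allergies?"
    print(respond("You are a program"))        # "does it please you to believe i am a program?"
```

Because the catch-all rule at the end always matches, the program never falls silent, which is what lets such a shallow mechanism "sustain a conversation".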
Recommended publications
  • Shakespeare in the Eighteenth Century: Algorithm for Quotation Identification
Marion Pauline Chiariglione (IUT Dijon, University of Burgundy; Bachelor of Science in Computer Science, 2017). Shakespeare in the Eighteenth Century: Algorithm for Quotation Identification. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, University of Arkansas, Fayetteville, May 2020. This thesis is approved for recommendation to the Graduate Council: Susan Gauch, Ph.D. (Thesis Director); Qinghua Li, Ph.D. (Committee member); Khoa Luu, Ph.D. (Committee member). Citation: Chiariglione, M. P. (2020). Shakespeare in the Eighteenth Century: Algorithm for Quotation Identification. Theses and Dissertations, ScholarWorks@UARK, https://scholarworks.uark.edu/etd/3580. Part of the Numerical Analysis and Scientific Computing Commons, and the Theory and Algorithms Commons.

Abstract: Quoting a borrowed excerpt of text within another literary work was infrequently done prior to the beginning of the eighteenth century. However, quoting other texts, particularly Shakespeare, became quite common after that. Our work develops automatic approaches to identify that trend. Initial work focuses on identifying exact and modified sections of texts taken from works of Shakespeare in novels spanning the eighteenth century. We then introduce a novel approach to identifying modified quotes by adapting the Edit Distance metric, which is character based, to a word based approach.
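The adaptation the thesis describes, swapping characters for words as the unit of comparison in Edit Distance, can be sketched in a few lines. The implementation below is a generic word-level Levenshtein distance written for illustration; it is not taken from the thesis, and the whitespace tokenization is deliberately naive.

```python
def word_edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed over word tokens instead of characters."""
    s, t = a.lower().split(), b.lower().split()
    # dp[i][j] = edits needed to turn the first i words of s into the first j words of t
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        dp[i][0] = i
    for j in range(len(t) + 1):
        dp[0][j] = j
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete a word
                           dp[i][j - 1] + 1,         # insert a word
                           dp[i - 1][j - 1] + cost)  # substitute a word
    return dp[len(s)][len(t)]

if __name__ == "__main__":
    original = "to be or not to be that is the question"
    modified = "to be or not to be that is indeed the question"
    print(word_edit_distance(original, modified))  # 1: one inserted word
```

A small word-level distance between a candidate passage and a Shakespeare line is then evidence of a modified quotation, whereas a character-level distance would be inflated by every changed spelling.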
  • Using N-Grams to Understand the Nature of Summaries
Using N-Grams to Understand the Nature of Summaries. Michele Banko and Lucy Vanderwende, One Microsoft Way, Redmond, WA 98052. {mbanko, lucyv}@microsoft.com

Abstract: Although single-document summarization is a useful, well-studied task, the nature of multi-document summarization is only beginning to be studied in detail. While close attention has been paid to what technologies are necessary when moving from single to multi-document summarization, the properties of human-written multi-document summaries have not been quantified. In this paper, we empirically characterize human-written summaries provided in a widely used summarization corpus by attempting to answer the questions: Can multi-document summaries that are written by humans be characterized as extractive or generative? Are multi-document summaries less extractive than single-document summaries?

… views of the event being described over different documents, or present a high-level view of an event that is not explicitly reflected in any single document. A multi-document summary may also indicate the presence of new or distinct information contained within a set of documents describing the same topic (McKeown et al., 1999; Mani and Bloedorn, 1999). To meet these expectations, a multi-document summary is required to generalize, condense and merge information coming from multiple sources. Although single-document summarization is a well-studied task (see Mani and Maybury, 1999 for an overview), multi-document summarization is only recently being studied closely (Marcu & Gerber 2001). While close attention has been paid to multi-document summarization technologies (Barzilay et al. 2002, Goldstein et al. 2000), the inherent properties of human-written multi-document summaries have not yet been quantified.
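One way to make the paper's question concrete is to measure how many of a summary's n-grams also occur in the source documents: near-total overlap indicates extractive behaviour, while low overlap suggests generative rewording. The sketch below is a generic illustration of that idea, not the authors' code, and the example texts are invented.

```python
def ngrams(text: str, n: int) -> set:
    """Return the set of word n-grams in a text (lowercased, whitespace-tokenized)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(summary: str, sources: list, n: int = 2) -> float:
    """Fraction of summary n-grams that also appear somewhere in the source documents."""
    summary_grams = ngrams(summary, n)
    if not summary_grams:
        return 0.0
    source_grams = set()
    for doc in sources:
        source_grams |= ngrams(doc, n)
    return len(summary_grams & source_grams) / len(summary_grams)

if __name__ == "__main__":
    docs = ["the storm hit the coast on monday causing floods",
            "residents were evacuated after the storm caused flooding"]
    summary = "a storm caused flooding and residents were evacuated"
    # Values near 1.0 indicate an extractive summary; lower values suggest generative rewriting.
    print(round(ngram_overlap(summary, docs, n=2), 2))
```

Repeating the measurement for several n-gram lengths gives a simple profile of how extractive a given human- or machine-written summary is.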
  • Text and Data Mining: Technologies Under Construction
Text and Data Mining: Technologies Under Construction

Who's inside: Accenture, American Institute of Biological Sciences, Battelle, Bristol-Myers Squibb, Clinerion, Columbia Pipeline Group, Copyright Clearance Center, Inc., CrossRef, Docear, Elsevier, Figshare, General Electric, IBM, Komatsu, Linguamatics Limited, MedAware, Mercedes-Benz, Meta, Novartis, OMICtools, Science Europe, SciTech Strategies, SPARC, Sparrho, Spotfire, Talix, UnitedHealth Group, Verisk, VisTrails, Wellcome Trust.

Copyright Clearance Center, Inc. (CCC) has licensed this report from Outsell, Inc., with the right to distribute it for marketing and market education purposes. CCC did not commission this report, nor is this report a fee-for-hire white paper. Outsell's fact-based research, analysis and rankings, and all aspects of our opinion were independently derived and CCC had no influence or involvement on the design or findings within this report. For questions, please contact Outsell at [email protected].

Market Performance: Advancing the Business of Information. January 22, 2016.

Table of Contents: Why This Topic; Methodology; How It Works; Applications (Scientific Research, Healthcare, Engineering); Challenges; Implications; 10 to Watch; Essential Actions; Imperatives for Information Managers; Related Research. Figures & Tables: Table 1. Providers of TDM-Related Functions.

Why This Topic: Text and data mining (TDM), also referred to as content mining, is a major focus for academia, governments, healthcare, and industry as a way to unleash the potential for previously undiscovered connections among people, places, things, and, for the purpose of this report, scientific, technical, and healthcare information. Although there have been continuous advances in methodologies and technologies, the power of TDM has not yet been fully realized, and technology struggles to keep up with the ever-increasing flow of content.
  • Multi-Document Biography Summarization
Multi-document Biography Summarization. Liang Zhou, Miruna Ticrea, Eduard Hovy. University of Southern California, Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA 90292-6695. {liangz, miruna, hovy}@isi.edu

Abstract: In this paper we describe a biography summarization system using sentence classification and ideas from information retrieval. Although the individual techniques are not new, assembling and applying them to generate multi-document biographies is new. Our system was evaluated in DUC2004. It is among the top performers in task 5 – short summaries focused by person questions.

1 Introduction. Automatic text summarization is one form of information management. It is described as selecting a subset of sentences from a document that is in size a small percentage of the original and yet is just as informative. Summaries can serve as surrogates of the full texts in the context of Information Retrieval (IR). Summaries are created from two types of text sources, a single document or a set of documents. Multi-document summarization (MDS) is a natural and more elaborative extension of the single-document summarization, and poses additional difficulties on algorithm design. Various kinds of summaries fall into two broad categories: generic summaries are the direct derivatives of the source texts; special-interest summaries are generated in response to queries or topic-oriented questions.

(Figure 1. Overall design of the biography summarization system.)

To determine what and how sentences are selected and ranked, a simple IR method and experimental classification methods both contributed. The set of top-scoring sentences, after redundancy removal, is the resulting biography. As yet, the system contains no inter-sentence 'smoothing' stage. In this paper, work in related areas is discussed in Section 2; a description of our biography corpus […] component is in Section 3; Section 4 explains the …
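The selection step the abstract describes, scoring sentences and dropping redundant ones before assembling the biography, can be sketched generically. The code below ranks sentences by overlap with a person-focused query and greedily skips near-duplicates; it is an illustrative stand-in for the paper's combination of IR scoring and sentence classification, and the example data, similarity threshold, and query are invented.

```python
import re

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(sentence: str, query: str) -> float:
    """Simple IR-style score: fraction of query terms present in the sentence."""
    q = tokens(query)
    return len(tokens(sentence) & q) / len(q) if q else 0.0

def too_similar(sentence: str, chosen: list, threshold: float = 0.6) -> bool:
    """Redundancy check: Jaccard similarity against already-selected sentences."""
    a = tokens(sentence)
    for c in chosen:
        b = tokens(c)
        if a and b and len(a & b) / len(a | b) >= threshold:
            return True
    return False

def build_biography(sentences: list, query: str, max_sentences: int = 3) -> list:
    ranked = sorted(sentences, key=lambda s: score(s, query), reverse=True)
    chosen = []
    for s in ranked:
        if len(chosen) == max_sentences:
            break
        if not too_similar(s, chosen):
            chosen.append(s)
    return chosen

if __name__ == "__main__":
    sents = ["Ada Lovelace was born in London in 1815.",
             "Ada Lovelace was born in 1815 in London.",  # near-duplicate, filtered out
             "She wrote the first published algorithm for a machine.",
             "The weather in London is often rainy."]
    print(build_biography(sents, "who is Ada Lovelace born work", max_sentences=2))
```

In the paper's system the plain query-overlap score is supplemented by a trained sentence classifier, but the rank-then-deduplicate skeleton is the same.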
  • Natural Language Processing
Chowdhury, G. (2003). Natural language processing. Annual Review of Information Science and Technology, 37, pp. 51-89. ISSN 0066-4200. http://eprints.cdlr.strath.ac.uk/2611/

This is an author-produced version of a paper published in the Annual Review of Information Science and Technology (ISSN 0066-4200). This version has been peer-reviewed, but does not include the final publisher proof corrections, published layout, or pagination.

Natural Language Processing. Gobinda G. Chowdhury, Dept. of Computer and Information Sciences, University of Strathclyde, Glasgow G1 1XH, UK. e-mail: [email protected]

Introduction: Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform the desired tasks.
  • Application of Text Mining to Biomedical Knowledge Extraction: Analyzing Clinical Narratives and Medical Literature
Amy Neustein, S. Sagar Imambi, Mário Rodrigues, António Teixeira and Liliana Ferreira. Chapter 1: Application of text mining to biomedical knowledge extraction: analyzing clinical narratives and medical literature

Abstract: One of the tools that can aid researchers and clinicians in coping with the surfeit of biomedical information is text mining. In this chapter, we explore how text mining is used to perform biomedical knowledge extraction. By describing its main phases, we show how text mining can be used to obtain relevant information from vast online databases of health science literature and patients' electronic health records. In so doing, we describe the workings of the four phases of biomedical knowledge extraction using text mining (text gathering, text preprocessing, text analysis, and presentation) entailed in retrieval of the sought information with a high accuracy rate. The chapter also includes an in-depth analysis of the differences between clinical text found in electronic health records and biomedical text found in online journals, books, and conference papers, as well as a presentation of various text mining tools that have been developed in both university and commercial settings.

1.1 Introduction: The corpus of biomedical information is growing very rapidly. New and useful results appear every day in research publications, from journal articles to book chapters to workshop and conference proceedings. Many of these publications are available online through journal citation databases such as Medline – a subset of the PubMed interface that enables access to Medline publications – which is among the largest and most well-known online databases for indexing professional literature. Such databases and their associated search engines contain important research work in the biological and medical domain, including recent findings pertaining to diseases, symptoms, and medications.
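The four phases named in the abstract, text gathering, preprocessing, analysis, and presentation, map naturally onto a small pipeline. The sketch below is a toy illustration of that decomposition, using a hand-made dictionary lookup in place of a real biomedical entity recognizer; the document list and term dictionary are invented.

```python
import re

# Invented mini-dictionary standing in for a real biomedical terminology resource.
TERM_DICTIONARY = {"aspirin": "medication", "headache": "symptom", "migraine": "disease"}

def gather() -> list:
    """Phase 1, text gathering: a hard-coded stand-in for fetching abstracts or EHR notes."""
    return ["Patient reports headache relieved by aspirin.",
            "History of migraine; aspirin prescribed."]

def preprocess(doc: str) -> list:
    """Phase 2, text preprocessing: lowercase and tokenize."""
    return re.findall(r"[a-z]+", doc.lower())

def analyze(token_list: list) -> list:
    """Phase 3, text analysis: dictionary-based entity spotting."""
    return [(tok, TERM_DICTIONARY[tok]) for tok in token_list if tok in TERM_DICTIONARY]

def present(results: list) -> None:
    """Phase 4, presentation: report the extracted entities per document."""
    for i, entities in enumerate(results, 1):
        print(f"document {i}: {entities}")

if __name__ == "__main__":
    documents = gather()
    present([analyze(preprocess(d)) for d in documents])
```

Real systems replace each toy function with substantial machinery (crawlers or database APIs, clinical tokenizers, statistical entity recognizers, visualization front ends), but the four-phase decomposition stays the same.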
  • QuesGen Using NLP
International Journal of Latest Trends in Engineering and Technology, Vol. 13, Issue 2, pp. 009-014. DOI: http://dx.doi.org/10.21172/1.132.02. e-ISSN: 2278-621X

QuesGen Using NLP. Pawan NGP, Pooja Bahuguni, Pooja Dattatri, Shilpi Kumari, Vikranth B.M

Abstract — When people read for long hours, they are seldom able to grasp concepts, which gives them a false sense of understanding. The aim of this project is to tackle this problem by processing a given text and generating applicable questions and answers. The steps followed are: 1. Candidate key sentences are selected (using TextRank). 2. Candidate keywords are selected from the candidate key sentences (using RAKE). 3. The selected key sentences and keywords are stored in the database (MongoDB) and presented to the user through a chatbot interface.

Keywords — NLP, NLP toolkit, sentence extraction, keyword extraction, chatbot, RAKE, TextRank

1. Introduction: Humans are curious by nature, asking questions to meet their never-ending quest for information and knowledge. For example, teachers ask students questions to evaluate their performance, pupils learn by asking questions of teachers, and even everyday conversation consists of asking questions. Questions are a major part of countless learning interactions. However, with the advent of technology, attention spans have gone down significantly and people are not able to ask good questions. It has been noticed that when people try to read for long hours, they are seldom able to grasp concepts, yet having spent some time reading gives them a false sense of understanding.
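Step 1 of the pipeline, picking candidate key sentences with TextRank, amounts to running PageRank over a sentence-similarity graph. The sketch below is a bare-bones, dependency-free version of that idea written for illustration; it is not the project's code, and the word-overlap similarity measure and damping factor are the usual textbook defaults.

```python
import re

def words(sentence: str) -> set:
    return set(re.findall(r"[a-z]+", sentence.lower()))

def similarity(a: str, b: str) -> float:
    """Word-overlap similarity between two sentences (a common TextRank choice)."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / (len(wa | wb) or 1)

def textrank(sentences: list, damping: float = 0.85, iterations: int = 30) -> list:
    n = len(sentences)
    sim = [[similarity(sentences[i], sentences[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new_scores = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out = sum(sim[j])  # total outgoing weight of node j
                if sim[j][i] > 0 and out > 0:
                    rank += scores[j] * sim[j][i] / out
            new_scores.append((1 - damping) / n + damping * rank)
        scores = new_scores
    # Return sentences sorted by centrality, highest first.
    return [s for _, s in sorted(zip(scores, sentences), reverse=True)]

if __name__ == "__main__":
    text = ["Questions drive learning.",
            "Teachers ask students questions to evaluate learning.",
            "Students learn by asking questions.",
            "The cafeteria serves lunch at noon."]
    print(textrank(text)[:2])  # the two most central candidate key sentences
```

Step 2 would then run a keyword extractor such as RAKE over these top-ranked sentences, and step 3 stores the sentence-keyword pairs for the chatbot to serve as questions.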
  • An Automatic Text Summarization for Malayalam Using Sentence Extraction
International Journal of Advanced Computational Engineering and Networking, ISSN: 2320-2106, Volume 3, Issue 8, Aug. 2015

An Automatic Text Summarization for Malayalam Using Sentence Extraction. Renjith S R (M.Tech Computer and Information Science, Dept. of Computer Science, College of Engineering Cherthala, Kerala, India-688541) and Sony P (Assistant Professor, Dept. of Computer Science, College of Engineering Cherthala, Kerala, India-688541)

Abstract — Text summarization is the process of generating a short summary for a document that contains the significant portion of its information. In an automatic text summarization process, a text is given to the computer and the computer returns a shorter, less redundant extract of the original text. The proposed method is a sentence-extraction-based single-document text summarization which produces a generic summary for a Malayalam document. Sentences are ranked based on feature scores and Google's PageRank formula. The top k ranked sentences are included in the summary, where k depends on the compression ratio between the original text and the summary. Performance evaluation is done by comparing the summarization outputs with manual summaries generated by human evaluators.

Keywords — Text summarization, sentence extraction, stemming, TF-ISF score, sentence similarity, PageRank formula, summary generation.

I. Introduction: With the enormous growth of information in cyberspace, conventional Information Retrieval techniques have become inefficient for finding relevant information effectively. When we give a keyword to be searched on the internet, it returns thousands of documents, overwhelming the user.

… a summary which represents the subject matter of an article by understanding the whole meaning, generated by reformulating the salient units selected from the input sentences; it may contain some text units which are not present in the input text. An extract is a summary consisting of a number of sentences selected from the input text. …
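The scoring step described in the abstract, ranking sentences by TF-ISF (term frequency times inverse sentence frequency) and keeping the top k sentences dictated by a compression ratio, can be illustrated generically. The sketch below is language-agnostic toy code, not the paper's Malayalam-specific implementation; stemming and the PageRank component are omitted.

```python
import math
import re

def tokenize(sentence: str) -> list:
    return re.findall(r"\w+", sentence.lower())

def tf_isf_scores(sentences: list) -> list:
    """Score each sentence by the sum of TF * ISF over its terms."""
    n = len(sentences)
    token_lists = [tokenize(s) for s in sentences]
    # Inverse sentence frequency: terms appearing in few sentences weigh more.
    sentence_freq = {}
    for toks in token_lists:
        for term in set(toks):
            sentence_freq[term] = sentence_freq.get(term, 0) + 1
    isf = {term: math.log(n / count) + 1.0 for term, count in sentence_freq.items()}
    scores = []
    for toks in token_lists:
        tf = {t: toks.count(t) / len(toks) for t in set(toks)}
        scores.append(sum(tf[t] * isf[t] for t in tf))
    return scores

def summarize(sentences: list, compression: float = 0.5) -> list:
    """Keep the top k sentences, with k set by the compression ratio, in original order."""
    k = max(1, round(len(sentences) * compression))
    scores = tf_isf_scores(sentences)
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

if __name__ == "__main__":
    doc = ["The river flooded the village after heavy rain.",
           "Rescue teams evacuated residents overnight.",
           "Officials said the rain was the heaviest in a decade.",
           "Local schools will remain closed."]
    print(summarize(doc, compression=0.5))
```

Re-emitting the selected sentences in their original order, as the last step does, is what keeps a purely extractive summary readable.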
  • Next Generation Catalogues: An Analysis of User Search Strategies and Behavior
Next Generation Catalogues: An Analysis of User Search Strategies and Behavior. By Fredrick Kiwuwa Lugya. Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Library and Information Science in the Graduate College of the University of Illinois at Urbana-Champaign, 2017. Urbana, Illinois. Doctoral Committee: Associate Professor Kathryn La Barre, Chair and Director; Assistant Professor Nicole A. Cooke; Dr. Jennifer Emanuel Taylor, University of Illinois Chicago; Associate Professor Carol Tilley.

Abstract: The movement from online catalogues to search and discovery systems has not addressed the goals of true resource discoverability. While catalogue user studies have focused on user search and discovery processes and experiences, and construction and manipulation of search queries, little insight is given to how searchers interact with search features of next generation catalogues. Better understanding of user experiences can help guide informed decisions when selecting and implementing new systems. In this study, fourteen graduate students completed a set of information seeking tasks using UIUC's VuFind installation. Observations of these interactions elicited insight into both search feature use and user understanding of the function of features. Participants used the basic search option for most searches. This is because users understand that basic search draws from a deep index that always gives results regardless of search terms; and because it is convenient, appearing at every level of the search, thus reducing effort and shortening search time.
  • Automatic Document Summarization by Sentence Extraction
Вычислительные технологии (Computational Technologies), Vol. 12, No. 5, 2007

Automatic Document Summarization by Sentence Extraction. R. M. Aliguliyev, Institute of Information Technology of the National Academy of Sciences of Azerbaijan, Baku. e-mail: [email protected]

A method for automatic document summarization is presented that generates a document summary by clustering and extracting sentences from the source document. The advantage of the proposed approach is that the generated summary can cover the main content of practically all of the topics presented in the document. To determine the optimal number of clusters, a criterion for evaluating clustering quality is introduced.

Introduction: Automatic document processing is a research field that is currently extremely active. One important task in this field is automatic document summarization, which preserves its information content [1]. With a large volume of text documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Text search and summarization are the two essential technologies that complement each other. Text search engines return a set of documents that seem to be relevant to the user's query, and text summaries enable quick examination of the returned documents. In general, automatic document summarization takes a source document (or source documents) as input, extracts the essence of the source(s), and presents a well-formed summary to the user. Mani and Maybury [1] formally defined automatic document summarization as the process of distilling the most important information from a source(s) to produce an abridged version for a particular user (or users) and task (or tasks). The process can be decomposed into three phases: analysis, transformation and synthesis. The analysis phase analyzes the input document and selects a few salient features.
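The approach the abstract outlines, clustering the document's sentences and extracting representatives so that the summary touches every topic, can be sketched with a simple greedy clustering. The code below is an illustrative stand-in rather than Aliguliyev's method: it groups sentences whose word overlap exceeds a threshold and then emits the longest sentence of each cluster; the threshold and example document are invented.

```python
import re

def words(sentence: str) -> set:
    return set(re.findall(r"[a-z]+", sentence.lower()))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / (len(a | b) or 1)

def cluster_sentences(sentences: list, threshold: float = 0.2) -> list:
    """Greedy single-pass clustering: attach each sentence to the first sufficiently similar cluster."""
    clusters = []  # each cluster is a list of sentences
    for s in sentences:
        for cluster in clusters:
            if jaccard(words(s), words(cluster[0])) >= threshold:
                cluster.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def summarize_by_clusters(sentences: list) -> list:
    """One representative (here: the longest sentence) per cluster, so every topic is covered."""
    return [max(cluster, key=len) for cluster in cluster_sentences(sentences)]

if __name__ == "__main__":
    doc = ["The election results were announced on Friday.",
           "Official results of the election were announced late on Friday.",
           "A storm disrupted travel across the region.",
           "Heavy storm winds closed airports across the region."]
    print(summarize_by_clusters(doc))
```

The number of clusters here falls out of the similarity threshold; the paper instead introduces an explicit clustering-quality criterion to choose the optimal number of clusters.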
  • The Elements of Automatic Summarization
The Elements of Automatic Summarization, by Daniel Jacob Gillick. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Nelson Morgan, Chair; Professor Daniel Klein; Professor Thomas Griffiths. Spring 2011. Copyright © 2011 by Daniel Jacob Gillick.

Abstract: This thesis is about automatic summarization, with experimental results on multi-document news topics: how to choose a series of sentences that best represents a collection of articles about one topic. I describe prior work and my own improvements on each component of a summarization system, including preprocessing, sentence valuation, sentence selection and compression, sentence ordering, and evaluation of summaries. The centerpiece of this work is an objective function for summarization that I call "maximum coverage". The intuition is that a good summary covers as many important facts or concepts in the original documents as possible. It turns out that this objective, while computationally intractable in general, can be solved efficiently for medium-sized problems and has reasonably good fast approximate solutions. Most importantly, the use of an objective function marks a departure from previous algorithmic approaches to summarization.

Acknowledgements: Getting a Ph.D. is hard. Not really hard in the day-to-day sense, but more because I spent a lot of time working on a few small problems, and at the end of six years, I have only made a few small contributions to the world.
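The "maximum coverage" objective has a standard greedy approximation: repeatedly add the sentence that covers the most not-yet-covered concepts per unit of length until the budget is spent. The sketch below illustrates that greedy procedure with word bigrams standing in for "concepts"; it is a generic illustration of the objective, not the exact system described in the thesis, and the budget, concept definition, and example documents are invented.

```python
def concepts(sentence: str) -> set:
    """Use word bigrams as a cheap stand-in for the important 'concepts' in a sentence."""
    tokens = sentence.lower().split()
    return {(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)}

def greedy_max_coverage(sentences: list, budget: int = 20) -> list:
    """Greedily pick sentences that cover the most new concepts per word, within a word budget."""
    covered, summary = set(), []
    remaining = list(sentences)
    length = 0
    while remaining:
        def gain(s):
            return len(concepts(s) - covered) / max(len(s.split()), 1)
        best = max(remaining, key=gain)
        if gain(best) == 0 or length + len(best.split()) > budget:
            break
        summary.append(best)
        covered |= concepts(best)
        length += len(best.split())
        remaining.remove(best)
    return summary

if __name__ == "__main__":
    docs = ["The mayor announced a new transit plan on Monday.",
            "The transit plan includes new bus lanes downtown.",
            "The mayor announced the plan at a press conference.",
            "Critics say the plan ignores cycling infrastructure."]
    print(greedy_max_coverage(docs, budget=25))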
  • A Framework for Evaluating the Retrieval Effectiveness of Search Engines
A Framework for Evaluating the Retrieval Effectiveness of Search Engines. Dirk Lewandowski, Hamburg University of Applied Sciences, Germany

This is a preprint of a book chapter to be published in: Jouis, Christophe: Next Generation Search Engine: Advanced Models for Information Retrieval. Hershey, PA: IGI Global, 2012. http://www.igi-global.com/book/next-generation-search-engines/59723

Abstract: This chapter presents a theoretical framework for evaluating next generation search engines. We focus on search engines whose results presentation is enriched with additional information and does not merely present the usual list of "10 blue links", that is, of ten links to results, accompanied by a short description. While Web search is used as an example here, the framework can easily be applied to search engines in any other area. The framework not only addresses the results presentation, but also takes into account an extension of the general design of retrieval effectiveness tests. The chapter examines the ways in which this design might influence the results of such studies and how a reliable test is best designed.

Introduction: Information retrieval systems in general and specific search engines need to be evaluated during the development process, as well as when the system is running. A main objective of the evaluations is to improve the quality of the search results, although other reasons for evaluating search engines do exist (see Lewandowski & Höchstötter, 2008). A variety of quality factors can be applied to search engines. These can be grouped into four major areas (Lewandowski & Höchstötter, 2008):

• Index Quality: This area of quality measurement indicates the important role that search engines' databases play in retrieving relevant and comprehensive results.
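Retrieval effectiveness tests of the kind the chapter extends typically boil down to comparing a ranked result list against a set of relevance judgments. The snippet below computes precision at k and recall for one query; it is a minimal textbook illustration rather than part of Lewandowski's framework, and the document identifiers are invented.

```python
def precision_at_k(ranked: list, relevant: set, k: int) -> float:
    """Fraction of the top-k results that are judged relevant."""
    top_k = ranked[:k]
    return sum(1 for doc in top_k if doc in relevant) / k if k else 0.0

def recall(ranked: list, relevant: set) -> float:
    """Fraction of all relevant documents that appear anywhere in the result list."""
    return sum(1 for doc in ranked if doc in relevant) / len(relevant) if relevant else 0.0

if __name__ == "__main__":
    results = ["d3", "d7", "d1", "d9", "d4"]   # ranked result list for one query
    judged_relevant = {"d1", "d3", "d8"}       # relevance judgments for that query
    print(precision_at_k(results, judged_relevant, k=3))  # 2/3
    print(recall(results, judged_relevant))               # 2/3
```

Averaging such per-query measures over a query set is the usual basis of the effectiveness studies the chapter discusses; the framework's point is that enriched result presentations require extending this classic design.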