Lao Named Entity Recognition Based on Conditional Random Fields with Simple Heuristic Information


2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD'15)
978-1-4673-7681-5 ©2015 IEEE

Mengjie YANG 1,2, Lanjiang ZHOU 1,2,*, Zhengtao YU 1,2, Shengxiang GAO 1,2, Jianyi GUO 1,2

1 School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2 Key Laboratory of Intelligent Information Processing, Kunming University of Science and Technology, Kunming 650500, China
* Corresponding author: Lanjiang Zhou ([email protected])

Abstract—Based on the characteristics of Lao named entities, this paper proposes an approach to Lao Named Entity Recognition (NER) based on Conditional Random Fields (CRFs) with heuristic knowledge. First, the text is segmented into a word sequence and a three-label BIO scheme is designed for personal-name and location-name recognition (B marks the first word of an entity, I a non-initial word inside an entity, and O any other word). Second, named entity features of Lao, such as the clue-word feature and the predicate feature, are selected for the CRF model, and candidate named entities are recognized. Third, simple personal-name and location-name features of Lao are extracted to build heuristic information, which is used to verify the candidate entities. Finally, named entities that the CRF model fails to discover are further recognized with a named-entity word list, and the final set of entities is obtained. Experimental results show that the proposed method is effective: combining the machine learning method with heuristic information improves named entity recognition.

Keywords—Lao; Named Entity Recognition; Conditional Random Fields; Rules; Entity Feature

I. INTRODUCTION

Named Entity Recognition (NER) is important for many Natural Language Processing (NLP) tasks, such as Machine Translation (MT) [1], Cross-Language Information Retrieval (CLIR), Information Extraction (IE) [2], and parsing. At present the level of informatization of Lao is low and work on its lexical, syntactic, and semantic analysis is rare, so Lao NER plays an important role in advancing machine understanding and machine translation of the language.

There are many studies on NER in other languages, especially English, Chinese, and Thai, but research on Lao is still very limited. Recognition methods fall into three classes. The first is the rule-based approach [3]: a large domain-specific corpus is studied, the characteristics of the language are analyzed, and rules are formulated from the limited component elements and stable formation patterns of its named entities; recognition is then realized by rule matching. The second is the machine learning approach [4][5][6][7][8], in which entities are recognized by fusing features into a statistical model such as Conditional Random Fields (CRFs). For example, E. Fersini et al. [9] describe how the discovery of semantic information can be viewed as an optimization problem: a sequence of labels is assigned to a set of interdependent variables, and the dependencies among the variables are efficiently modeled with CRFs. E. Fersini and E. Messina [10] address the extraction of structured information from transcriptions produced by an Automatic Speech Recognition (ASR) system by integrating CRFs with available background information. M. Chang, L. Ratinov et al. [11] introduce constraints during the inference phase to preserve the necessary relationships over the output prediction; CRFs with constraints can capture more complex relationships among output variables for NER. Dan Roth [12] presents an inference procedure based on Integer Linear Programming (ILP) and extends CRF models to support general constraint structures efficiently, with good results on semantic role labeling. The third is a hybrid approach that combines rules with machine learning [13][14][15]. Because of the limitations of purely rule-based and purely statistical methods, many researchers now improve NER by combining the two; for Lao, this makes it possible to recognize named entities quickly by analyzing the features of the language within a machine learning framework.

Compared with English named entities, Lao and Chinese named entities are quite similar: neither language offers surface cues such as capitalization to help identify entities, and within a sentence there are no spaces to delimit words. The order of subject, predicate, and object is also the same, for example ທ່ານ (Mr.) ຫຍູ (Yu) ໄຊກິງ (Zaijing) ເປັນ (is) ນັກຂຽນ (writer). Lao also has characteristics of its own, for example the postposed attributive: ປະຊາຊົນ (people) ຈີນ (Chinese). In a native Lao personal name the given name comes first and the family name second, for example ສກໃຈ (Soukjai) ລດຕະນະ (Ladtana); otherwise the family name comes first and the given name second, for example ໂຈ່ວ (Zhou) ເອີ່ນລາຍ (Enlai). In a Lao sentence the adverbial generally comes last, for example ຫມູ່ເພື່ອນ (friends) ຂອງຂ້າພະເຈົ້າ (my) ສຶກສາ (learn) ໃນ (in) ກຸງ (city) ປັກກິ່ງ (Beijing). A general Lao location name is usually preceded by a special clue word, for example ແຂວງ (province) ຫຼວງພະບາງ (Luang Prabang). Lao personal names also carry gender information in a prefix: a name preceded by ທ້າວ (Tao) denotes a young man and one preceded by ນາງ (Niang) a young woman, for example ທ້າວ (Tao) ຄາໍາແພງ (Kanpen) and ນາງ (Niang) ມະນີ (Mani); similarly, a name preceded by ທ່ານ (Mr.) refers to a man and one preceded by ນາງ (Mrs.) to a woman [16][17]. This paper therefore draws on NER research for languages such as Chinese, combines the respective advantages of the rule-based and statistical methods, incorporates the inherent characteristics of Lao named entities, and studies Lao NER with a combination of rules and statistics.

The paper is organized as follows. Section 2 describes the Lao NER method based on CRFs. Section 3 introduces the conditional random field model. Section 4 presents the heuristic information. Section 5 describes the experiments and results. Section 6 concludes.

II. LAO NAMED ENTITY RECOGNITION METHOD BASED ON CONDITIONAL RANDOM FIELDS

For the recognition of Lao personal and location names, the complexity and specificity of their features make it almost impossible, in a limited-resource environment, to develop rules that cover most entities using the rule-based method alone. We therefore recognize entity types with a statistical method: the CRF learning algorithm, combined with word form, part of speech, the internal structural features of Lao named entities, and rich context, is used to learn personal- and location-name recognition models from a manually annotated corpus.

A. Definition of the generic feature template

When conditional random fields are adopted, the feature template must be defined by hand. TABLE I shows the generic feature template for recognizing Lao named entities. To enrich the description of the context, the template covers the four positions (-2, -1, 1, 2) around the current word. The generic feature template describes the current word together with the word form and part of speech of several context words, and thus expresses only limited Lao context information. In many cases simple word-form and part-of-speech features cannot fully describe complex linguistic phenomena, so feature description templates better suited to the inherent regularities of the language must be devised.

TABLE I. GENERIC FEATURE TEMPLATE

  Template   Template description
  Word(0)    The current word
  Word(1)    The first word to the right of the current word
  Word(2)    The second word to the right of the current word
  Word(-1)   The first word to the left of the current word
  Word(-2)   The second word to the left of the current word
  POS(0)     Part of speech (POS) of the current word
  POS(1)     POS of the first word to the right of the current word
  POS(2)     POS of the second word to the right of the current word
  POS(-1)    POS of the first word to the left of the current word
  POS(-2)    POS of the second word to the left of the current word

B. Definition of the composite template

Because the conditional random field is a log-linear model, features of the generic template can be combined to constitute complex, nonlinear features. The composite template can exploit long-range dependencies and rich context information. It is defined in TABLE II.

TABLE II. COMPOSITE TEMPLATE

  Template          Template description
  Word(0)/POS(0)    The current word and its POS
  Word(1)/POS(1)    The first word to the right of the current word and its POS
  Word(2)/POS(2)    The second word to the right of the current word and its POS
  Word(-1)/POS(-1)  The first word to the left of the current word and its POS
  Word(-2)/POS(-2)  The second word to the left of the current word and its POS
  Word(0)/POS(1)    The current word and the POS of the first word to the right
  Word(0)/POS(2)    The current word and the POS of the second word to the right
  Word(0)/POS(-1)   The current word and the POS of the first word to the left
  Word(0)/POS(-2)   The current word and the POS of the second word to the left
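The composite template rests on the log-linear form of the CRF. As a point of reference (the paper's own formulation appears in its Section 3), the standard linear-chain CRF defines the conditional probability of a label sequence y given a word sequence x as:

```latex
p(y \mid x) = \frac{1}{Z(x)} \exp\!\Big(\sum_{t=1}^{T}\sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, x, t)\Big),
\qquad
Z(x) = \sum_{y'} \exp\!\Big(\sum_{t=1}^{T}\sum_{k} \lambda_k\, f_k(y'_{t-1}, y'_t, x, t)\Big)
```

Here the f_k are feature functions, instantiated by the generic and composite templates of TABLES I and II, and the lambda_k are their learned weights; because the model is log-linear in the features, conjoined word/POS features can be added without changing the training or inference machinery.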
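A minimal sketch of how the generic (TABLE I) and composite (TABLE II) templates could be expanded into CRF feature strings for one position of a POS-tagged Lao sentence. The function name, the `_PAD_` boundary symbol, and the `key=value` string format are assumptions for illustration, not the authors' implementation.

```python
def expand_templates(words, tags, i):
    """Expand TABLE I and TABLE II templates into feature strings for position i."""
    def w(off):  # word at relative offset, padded at sentence edges
        j = i + off
        return words[j] if 0 <= j < len(words) else "_PAD_"

    def p(off):  # POS at relative offset, padded at sentence edges
        j = i + off
        return tags[j] if 0 <= j < len(tags) else "_PAD_"

    feats = []
    # Generic template (TABLE I): word form and POS at positions -2..2.
    for off in (-2, -1, 0, 1, 2):
        feats.append(f"Word({off})={w(off)}")
        feats.append(f"POS({off})={p(off)}")
    # Composite template (TABLE II): conjoined word/POS features.
    for off in (-2, -1, 0, 1, 2):
        feats.append(f"Word({off})/POS({off})={w(off)}|{p(off)}")
    for off in (-2, -1, 1, 2):
        feats.append(f"Word(0)/POS({off})={w(0)}|{p(off)}")
    return feats
```

A CRF toolkit would receive one such feature list per token; the composite entries are exactly the generic ones conjoined pairwise, which is what the log-linear model permits.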
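The three-label BIO scheme described in the abstract can be illustrated with a short helper that converts entity spans into per-word labels: B marks the first word of an entity, I a non-initial word inside it, and O any other word. The span format `(start, end_exclusive, type)` is an assumption for this sketch.

```python
def to_bio(n_words, spans):
    """Convert entity spans (start, end_exclusive, type) into BIO labels."""
    labels = ["O"] * n_words  # every word is outside an entity by default
    for start, end, etype in spans:
        labels[start] = f"B-{etype}"          # first word of the entity
        for j in range(start + 1, end):
            labels[j] = f"I-{etype}"          # non-initial words of the entity
    return labels
```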
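The heuristic verification step can be sketched as a clue-word check on the word preceding a candidate entity, using the prefixes cited in the introduction (ທ້າວ, ນາງ, ທ່ານ before personal names; ແຂວງ, ກຸງ before location names). The rule sets, entity-type codes, and function are hypothetical simplifications of the paper's heuristic component, not its actual rules.

```python
# Hypothetical clue-word lists built from the examples in the introduction.
PERSON_CLUES = {"ທ້າວ", "ນາງ", "ທ່ານ"}   # Tao / Niang (Mrs.) / Mr.
LOCATION_CLUES = {"ແຂວງ", "ກຸງ"}          # province / city

def verify_candidate(prev_word, entity_type):
    """Accept a CRF-proposed candidate if the preceding word is a matching clue word."""
    if entity_type == "PER":
        return prev_word in PERSON_CLUES
    if entity_type == "LOC":
        return prev_word in LOCATION_CLUES
    return False
```

In the paper's pipeline such checks confirm candidates produced by the CRF model; candidates that fail every heuristic would fall through to the named-entity word-list lookup mentioned in the abstract.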