Techniques of Semantic Analysis for Natural Language Processing – a Detailed Survey
Total pages: 16 | File type: PDF | Size: 1020 KB
Recommended publications
-
Some Strands of Wittgenstein's Normative Pragmatism, and Some Strains of his Semantic Nihilism
Robert B. Brandom (University of Pittsburgh, USA). Article, Disputatio. Philosophical Research Bulletin, © Studia Humanitatis – Universidad de Salamanca 2019. Received: 27 January 2018; accepted: 3 March 2018. Language: English. Keywords: meaning and use; hypothetical entities; antiscientism; semantic nihilism; linguistic dynamism.
Abstract: In this reflection I address one of the critical questions this monograph is about: how to justify proposing yet another semantic theory in the light of Wittgenstein's strong warnings against it. I see two clear motives for Wittgenstein's semantic nihilism. The first is the view that philosophical problems arise from postulating hypothetical entities such as "meanings". To dissolve philosophical problems rather than create new ones, Wittgenstein suggests substituting "meaning" with "use" and avoiding scientism in philosophy, together with the urge to penetrate in one's investigation to unobservable depths. I believe this first motive constitutes only a weak motive for Wittgenstein's quietism, because there are substantial differences between empirical theories in the natural sciences and semantic theories in philosophy that leave Wittgenstein's assimilation of the two open to criticism. But Wittgenstein is right, on the second motive, that given the dynamic character of linguistic practice, the classical project of semantic theory is a disease that can be removed or ameliorated only by heeding the advice to replace concern with meaning by concern with use. On my view, however, this does not preclude a different kind of theoretical approach to meaning that avoids the pitfalls of the Procrustean enterprise Wittgenstein complained about.
-
Semantic Analysis of the First Cities from a Deconstruction Perspective
Mojtaba Valibeigi* and Faezeh Ashuri, Buein Zahra Technical University, Imam Khomeini Blvd, Buein Zahra, Qazvin, Iran. Pages 43–48. DOI: 10.23968/2500-0055-2020-5-3-43-48. *Corresponding author: [email protected]
Abstract. Introduction: Deconstruction looks for any meaning, semantics, and concepts, and then shows how all of them seem to lead to chaos and are always on the border of meaning duality. Purpose of the study: The study is aimed at investigating urban identity from a deconstruction perspective. Since two important bases of deconstruction are text and meaning and their relationship, we chose the first cities on a symbolic level as a text and tried to analyze their meanings. Methods: The study used deductive content analysis in three steps: preparation, organization, and the final report or conclusion. In the first step, we discussed deconstruction philosophy based on Derrida's views. Then we determined some common semantic features of the first cities. Finally, we presented some conclusions based on a semantic interpretation of the first cities' identity. Results: It seems that all cities as texts tend to provoke a special imaginary meaning, while simultaneously promoting and emphasizing the opposite meaning of what they want to show. Keywords: deconstruction, text, meaning, urban semantics, the first cities.
Introduction (excerpt): Expressions are an act of recognizing or displaying ... to have a specific meaning (Gualberto and Kress, 2019; Leone, 2019; Stojiljković and Ristić Trajković, 2018).
-
Probabilistic Topic Modelling with Semantic Graph
Long Chen, Joemon M. Jose, Haitao Yu, Fajie Yuan, and Huaizhi Zhang. School of Computing Science, University of Glasgow, Sir Alwyns Building, Glasgow, UK. [email protected]
Abstract. In this paper we propose a novel framework, topic model with semantic graph (TMSG), which couples a topic model with the rich knowledge from DBpedia. To begin with, we extract the disambiguated entities from the document collection using a document entity linking system, i.e., DBpedia Spotlight, from which two types of entity graphs are created from DBpedia to capture local and global contextual knowledge, respectively. Given the semantic graph representation of the documents, we propagate the inherent topic-document distribution with the disambiguated entities of the semantic graphs. Experiments conducted on two real-world datasets show that TMSG can significantly outperform state-of-the-art techniques, namely, the author-topic model (ATM) and the topic model with biased propagation (TMBP). Keywords: topic model; semantic graph; DBpedia.
1 Introduction. Topic models, such as Probabilistic Latent Semantic Analysis (PLSA) [7] and Latent Dirichlet Allocation (LDA) [2], have been remarkably successful in analyzing textual content. Specifically, each document in a document collection is represented as random mixtures over latent topics, where each topic is characterized by a distribution over words. Such a paradigm is widely applied in various areas of text mining. In view of the fact that the information used by these models is limited to the document collection itself, some recent progress has been made on incorporating external resources, such as time [8], geographic location [12], and authorship [15], into topic models.
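For reference, the plain topic-model baseline that TMSG builds on (documents as mixtures over latent topics, topics as distributions over words) can be sketched with scikit-learn's `LatentDirichletAllocation`. This is a minimal sketch under assumed toy data, not the paper's TMSG implementation or its datasets.

```python
# A minimal plain-LDA baseline of the kind TMSG extends -- a sketch,
# not the authors' implementation; the toy corpus is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the stock market fell as investors sold shares",
    "investors traded shares on the stock exchange",
    "the team won the match with a late goal",
    "the striker scored a goal in the final match",
]

# Each document becomes a row of word counts (bag of words).
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit two latent topics: each topic is a distribution over words,
# each document a mixture over topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # shape (n_docs, n_topics); rows sum to 1
```

TMSG's contribution is to bias this inference with entity graphs drawn from DBpedia; the baseline above has no such external knowledge.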
A Way with Words: Recent Advances in Lexical Theory and Analysis: a Festschrift for Patrick Hanks
Gilles-Maurice de Schryver, editor (Ghent University and University of the Western Cape). Kampala: Menha Publishers, 2010, vii+375 pp; ISBN 978-9970-10-101-6, €59.95. Reviewed by Paul Cook, University of Toronto.
In his introduction to this collection of articles dedicated to Patrick Hanks, de Schryver presents a quote from Atkins referring to Hanks as "the ideal lexicographer's lexicographer." Indeed, Hanks has had a formidable career in lexicography, including playing an editorial role in the production of four major English dictionaries. But Hanks's achievements reach far beyond lexicography; in particular, Hanks has made numerous contributions to computational linguistics. Hanks is co-author of the tremendously influential paper "Word association norms, mutual information, and lexicography" (Church and Hanks 1989) and maintains close ties to our field. Furthermore, Hanks has advanced the understanding of the relationship between words and their meanings in text through his theory of norms and exploitations (Hanks forthcoming). The range of Hanks's interests is reflected in the authors and topics of the articles in this Festschrift; this review concentrates more on those articles that are likely to be of most interest to computational linguists, and does not discuss some articles that appeal primarily to a lexicographical audience. Following the introduction to Hanks and his career by de Schryver, the collection is split into three parts: Theoretical Aspects and Background, Computing Lexical Relations, and Lexical Analysis and Dictionary Writing.
1. Part I: Theoretical Aspects and Background. Part I begins with an unfinished article by the late John Sinclair, in which he begins to put forward an argument that multi-word units of meaning should be given the same status in dictionaries as ordinary headwords.
-
Employee Matching Using Machine Learning Methods
Master of Science in Computer Science thesis, May 2019. Sumeesha Marakani, Faculty of Computing, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden ([email protected]). University advisor: Prof. Veselka Boeva, Department of Computer Science. External advisors: Lars Tornberg ([email protected]) and Daniel Lundgren ([email protected]). The thesis is equivalent to 20 weeks of full-time studies.
Abstract. Background: Expertise retrieval is an information retrieval technique that focuses on identifying the most suitable "expert" for a task from a list of individuals. Objectives: This master thesis is a collaboration with Volvo Cars that attempts to apply this concept and match employees based on information extracted from an internal tool of the company. In this tool, the employees describe themselves in free-flowing text. This text is extracted from the tool and analyzed using Natural Language Processing (NLP) techniques.
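The expertise-retrieval setup described above can be sketched as a simple TF-IDF ranking baseline: represent each free-text self-description as a vector and rank profiles against a task query. The profiles and query below are invented for illustration; this is a hypothetical baseline, not the thesis's actual models or Volvo's data.

```python
# Expertise retrieval as TF-IDF ranking -- an illustrative baseline only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical free-text self-descriptions, one per employee.
profiles = [
    "machine learning and computer vision for autonomous driving",
    "front-end web development, javascript and user interfaces",
    "natural language processing, text mining and embeddings",
]
query = "who has experience with text mining and NLP"

vec = TfidfVectorizer()
P = vec.fit_transform(profiles)   # one TF-IDF row per employee profile
q = vec.transform([query])        # the task query, in the same space

scores = cosine_similarity(q, P).ravel()
best = int(scores.argmax())       # index of the best-matching profile
```

A real system would add stemming or lemmatization and likely dense embeddings, but the ranking skeleton stays the same.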
Matrix Decompositions and Latent Semantic Indexing
Online edition © 2009 Cambridge University Press (draft of April 1, 2009; feedback welcome), p. 403.
Chapter 18: Matrix decompositions and latent semantic indexing
On page 123 we introduced the notion of a term-document matrix: an M × N matrix C, each of whose rows represents a term and each of whose columns represents a document in the collection. Even for a collection of modest size, the term-document matrix C is likely to have several tens of thousands of rows and columns. In Section 18.1.1 we first develop a class of operations from linear algebra, known as matrix decomposition. In Section 18.2 we use a special form of matrix decomposition to construct a low-rank approximation to the term-document matrix. In Section 18.3 we examine the application of such low-rank approximations to indexing and retrieving documents, a technique referred to as latent semantic indexing. While latent semantic indexing has not been established as a significant force in scoring and ranking for information retrieval, it remains an intriguing approach to clustering in a number of domains, including for collections of text documents (Section 16.6, page 372). Understanding its full potential remains an area of active research. Readers who do not require a refresher on linear algebra may skip Section 18.1, although Example 18.1 is especially recommended as it highlights a property of eigenvalues that we exploit later in the chapter.
18.1 Linear algebra review. We briefly review some necessary background in linear algebra. Let C be an M × N matrix with real-valued entries; for a term-document matrix, all entries are in fact non-negative.
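The low-rank approximation this chapter builds toward can be computed directly with NumPy's SVD. The 4 × 4 matrix below is a toy stand-in for the M × N term-document matrix C (rows = terms, columns = documents); the chapter's real matrices have tens of thousands of rows and columns.

```python
# Rank-k approximation of a term-document matrix via SVD -- a small
# illustration of the decomposition behind LSI (toy counts, not real data).
import numpy as np

# Rows = terms, columns = documents (entries are term counts).
C = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

U, s, Vt = np.linalg.svd(C, full_matrices=False)

k = 2  # keep only the two largest singular values
C_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# C_k is the best rank-k approximation to C in the Frobenius norm;
# the error equals the energy of the discarded singular values.
err = np.linalg.norm(C - C_k, "fro")
```

Latent semantic indexing then represents documents by their coordinates in the k-dimensional space (the columns of `np.diag(s[:k]) @ Vt[:k, :]`) rather than by raw term counts.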
Finetuning Pre-Trained Language Models for Sentiment Classification of COVID19 Tweets
Technological University Dublin, ARROW@TU Dublin. Dissertations, School of Computer Sciences, 2020.
Finetuning Pre-trained Language Models for Sentiment Classification of COVID19 Tweets. Arjun Dussa. A dissertation submitted in partial fulfilment of the requirements of Technological University Dublin for the degree of M.Sc. in Computer Science (Data Analytics), September 2020. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License.
Recommended citation: Dussa, A. (2020) Finetuning Pre-trained Language Models for Sentiment Classification of COVID19 Tweets, Dissertation, Technological University Dublin. doi:10.21427/fhx8-vk25
-
Lexical Analysis
From Words to Numbers: Parsing, Tokenization, Extracting Information. Natural Language Processing lecture, Piotr Fulmański. Reference: Natural Language Processing in Action by Hobson Lane, Cole Howard, and Hannes Max Hapke (Manning Publications, 2019).
Lecture goals:
• Tokenizing your text into words and n-grams (tokens)
• Compressing your token vocabulary with stemming and lemmatization
• Building a vector representation of a statement
Terminology. A phoneme is a unit of sound that distinguishes one word from another in a particular language. In linguistics, a word of a spoken language can be defined as the smallest sequence of phonemes that can be uttered in isolation with objective or practical meaning. The concept of "word" is usually distinguished from that of a morpheme. Every word is composed of one or more morphemes. A morpheme is the smallest meaningful unit in a language, even if it does not stand on its own. A morpheme is not necessarily the same as a word: the main difference is that a morpheme sometimes does not stand alone, but a word, by definition, always stands alone.
Segmentation. Text segmentation is the process of dividing written text into meaningful units, such as words, sentences, or topics. Word segmentation is the problem of dividing a string of written language into its component words. (Text segmentation, retrieved 2020-10-20, https://en.wikipedia.org/wiki/Text_segmentation)
Segmentation is not so easy. Trying to resolve the question of what a word is and how to divide text into words, we face many exceptions. Is "ice cream" one word or two to you? Don't both words have entries in your mental dictionary that are separate from the compound word "ice cream"? What about the contraction "don't"? Should that string of characters be split into one or two "packets of meaning"? The single statement "Don't!" means "Don't you do that!" or "You, do not do that!" That's three hidden packets of meaning for a total of five tokens you'd like your machine to know about.
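The steps in the lecture goals above (tokenization, n-grams, bag of words) can be sketched in a few lines of plain Python. The sample sentence is invented, and keeping contractions whole is one segmentation choice among several (splitting "don't" into "do" + "n't" is another valid one, as the slide notes).

```python
# From words to numbers: tokenize, build bigrams, and count --
# a minimal pure-Python sketch of the pipeline described above.
import re
from collections import Counter

text = "Don't divide text naively: tokenization isn't trivial."

# Word segmentation: lowercase, keep letter runs, and (one design choice)
# keep contractions such as "don't" as single tokens.
tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())

# n-grams: sliding windows of n consecutive tokens.
def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

bigrams = ngrams(tokens, 2)

# Bag-of-words representation: token -> count.
bow = Counter(tokens)
```

Stemming or lemmatization would then merge variants like "divide"/"divides" before counting, shrinking the vocabulary the vector representation needs.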
The Linguistic Relativity Hypothesis
THE LINGUISTIC RELATIVITY HYPOTHESIS, by Michele Nathan. A thesis submitted to the Faculty of the College of Social Science in partial fulfillment of the requirements for the degree of Master of Arts, Florida Atlantic University, Boca Raton, Florida, December 1973. Thesis advisor: Dr. John D. Early, Department of Anthropology.
Abstract. Although interest in the linguistic relativity hypothesis seems to have waned in recent years, this thesis attempts to assess the available evidence supporting it in order to show that further investigation of the hypothesis might be most profitable. Special attention is paid to the fact that anthropology has largely failed to substantiate any claims that correlations between culture and the semantics of language do exist. This has been due to the impressionistic nature of the studies in this area. The use of statistics and hypothesis testing to provide more rigorous methodology is discussed in the hope that employing such paradigms would enable anthropology to contribute some sound evidence regarding the hypothesis.
Table of contents: Introduction; Chapter I: The History of the Formulation of the Hypothesis.
-
Latent Semantic Analysis for Text-Based Research
Behavior Research Methods, Instruments, & Computers, 1996, 28 (2), 197–202. Session: Analysis of Semantic and Clinical Data, chaired by Matthew S. McGlone, Lafayette College.
Latent semantic analysis for text-based research. Peter W. Foltz, New Mexico State University, Las Cruces, New Mexico.
Abstract. Latent semantic analysis (LSA) is a statistical model of word usage that permits comparisons of semantic similarity between pieces of textual information. This paper summarizes three experiments that illustrate how LSA may be used in text-based research. Two experiments describe methods for analyzing a subject's essay for determining from what text a subject learned the information and for grading the quality of information cited in the essay. The third experiment describes using LSA to measure the coherence and comprehensibility of texts.
One of the primary goals in text-comprehension research is to understand what factors influence a reader's ability to extract and retain information from textual material. The typical approach in text-comprehension research is to have subjects read textual material and then have them produce some form of summary, such as answering questions or writing an essay. This summary permits the experimenter to determine what information the ...
A theoretical approach to studying text comprehension has been to develop cognitive models of the reader's representation of the text (e.g., Kintsch, 1988; van Dijk & Kintsch, 1983). In such a model, semantic information from both the text and the reader's summary are represented as sets of semantic components called propositions. Typically, each clause in a text is represented by a single proposition.
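The semantic-similarity comparison Foltz describes can be sketched with a truncated SVD over a term-document matrix: texts are projected into a low-dimensional "latent semantic" space and compared by cosine. The four-sentence corpus is a toy assumption; real LSA applications train the space on large text collections.

```python
# Comparing texts in an LSA space -- a minimal sketch of the method
# described above (toy corpus; not the paper's trained space).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the patient was given medicine by the doctor",
    "the doctor prescribed medicine to the patient",
    "the orchestra performed a symphony last night",
    "the concert featured a famous symphony orchestra",
]

X = CountVectorizer(stop_words="english").fit_transform(corpus)

# Project term-document counts onto 2 latent dimensions; texts with
# similar word-usage patterns end up pointing in similar directions.
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)

sims = cosine_similarity(Z)
# Paraphrases (rows 0 and 1) should score higher with each other
# than with the unrelated concert sentences (rows 2 and 3).
```

Grading an essay then amounts to computing this cosine between the essay and a reference text.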
How Latent Is Latent Semantic Analysis?
Peter Wiemer-Hastings, University of Memphis, Department of Psychology, Campus Box 526400, Memphis TN 38152-6400. [email protected]
Abstract. Latent Semantic Analysis (LSA) is a statistical, corpus-based text comparison mechanism that was originally developed for the task of information retrieval, but in recent years has produced remarkably human-like abilities in a variety of language tasks. LSA has taken the Test of English as a Foreign Language and performed as well as non-native English speakers who were successful college applicants. It has shown an ability to learn words at a rate similar to humans. It has even graded papers as reliably as human graders. We have used LSA as a mechanism for evaluating the quality of student responses in an intelligent tutoring system, and its performance equals that of human raters with intermediate domain knowledge.
In the late 1980's, a group at Bellcore doing research on information retrieval techniques developed a statistical, corpus-based method for retrieving texts. Unlike the simple techniques which rely on weighted matches of keywords in the texts and queries, their method, called Latent Semantic Analysis (LSA), created a high-dimensional, spatial representation of a corpus and allowed texts to be compared geometrically. In the last few years, several researchers have applied this technique to a variety of tasks including the synonym section of the Test of English as a Foreign Language [Landauer et al., 1997], general lexical acquisition from text [Landauer and Dumais, 1997], selecting texts for students to read [Wolfe et al., 1998], judging the coherence of student essays [Foltz et al., 1998], and the evaluation of student contributions in an intelligent tutoring environment [Wiemer-Hastings et al., 1998; ...]
Fouilla: Navigating DBpedia by Topic
Tanguy Raynaud, Julien Subercaze, Delphine Boucard, Vincent Battu, Frédérique Laforest. Univ Lyon, UJM Saint-Etienne, CNRS, Laboratoire Hubert Curien UMR 5516, Saint-Etienne, France. [email protected]. CIKM 2018, Oct 2018, Turin, Italy. HAL Id: hal-01860672 (https://hal.archives-ouvertes.fr/hal-01860672), submitted on 23 Aug 2018.
Abstract. Navigating large knowledge bases made of billions of triples is very challenging. In this demonstration, we showcase Fouilla, a topical knowledge base browser that offers a seamless navigational experience of DBpedia. We propose an original approach that leverages both the structural and semantic contents of Wikipedia to enable a topic-oriented filter on DBpedia entities.
Introduction (excerpt): ... only the triples that concern this topic. For example, a user is interested in Italy through the prism of Sports, while another through the prism of World War II. For each of these topics, the relevant triples of the Italy entity differ. In such circumstances, faceted browsing offers no solution to retrieve the entities relative to a defined topic if the knowledge graph does not explicitly contain an adequate topic.