
Survey of Computational Approaches to Lexical Semantic Change Detection

Nina Tahmasebi∗, University of Gothenburg
Lars Borin∗∗, University of Gothenburg
Adam Jatowt†, Kyoto University

Abstract

Our languages are in constant flux, driven by external factors such as cultural, societal and technological changes, as well as by only partially understood internal motivations. Words acquire new meanings and lose old senses, new words are coined or borrowed from other languages, and obsolete words slide into obscurity. Understanding the characteristics of shifts in the meaning and in the use of words is useful for those who work with the content of historical texts and for the interested general public, but also in and of itself. The findings from automatic lexical semantic change detection, and the models of diachronic conceptual change, are also currently being incorporated into approaches for measuring across-time document similarity, information retrieval from long-term document archives, the design of OCR algorithms, and so on. In recent years we have seen a surge of interest in the academic community in computational methods and tools supporting inquiry into diachronic conceptual change and lexical replacement. This article provides a comprehensive survey of recent computational techniques for tackling both.

1. Introduction

Vocabulary change has long been a topic of interest to linguists and the general public alike. This is not surprising considering the central role of language in all human spheres of activity, together with the fact that words are its most salient elements. Thus it is natural that we want to know the "stories of the words we use," including when and how words came to possess the senses they currently have, as well as what currently unused senses they had in the past. Professionals and the general public alike are interested in the origins and the history of our language, as testified to by numerous books on semantic change aimed at a wide readership.
Traditionally, vocabulary change has been studied by linguists and other scholars in the humanities and social sciences with manual, "close-reading" approaches. While this is still largely the case inside linguistics, recently we have seen proposals, originating primarily from computational linguistics and computer science, for how semi-automatic and automatic methods could be used to scale up and enhance this research. Indeed, over the last two decades we have observed a surge of research papers dealing with the detection of lexical semantic changes and the formulation of generalizations about them, based on datasets spanning decades or centuries. With the digitization of historical documents going on apace in many different contexts, accounting for vocabulary change has also become a concern in the design of information access systems for this rapidly growing body of texts. At the same time, large-scale corpora have become available, allowing the testing of computational approaches on related tasks and providing quantitative support for various hypotheses.

Despite the recent increase in research using computational approaches to investigate lexical semantic changes, the community lacks a solid and extensive overview of this growing field. The aim of the present survey is to fill this gap. While we were preparing this survey article, two related surveys appeared, illustrating the timeliness of the topic. The survey by Kutuzov et al. (2018) has a narrower scope, focusing entirely on diachronic word embeddings.

∗ Box 200, SE-405 30 Gothenburg, Sweden, E-mail: [email protected]
∗∗ Box 200, SE-405 30 Gothenburg, Sweden, E-mail: [email protected]
† Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 Japan, E-mail: [email protected]

arXiv:1811.06278v2 [cs.CL] 13 Mar 2019
© 2018 Association for Computational Linguistics
Computational Linguistics Volume 1, Number 1
The broader survey presented by Tang (2018) covers much of the same field as ours in terms of computational linguistics work, but provides considerably less discussion of the connections and relevance of this work to linguistic research. A clear aim in preparing our presentation has been to anchor it firmly in mainstream historical linguistics and lexical typology, the two linguistics subdisciplines most relevant to our survey.

Further, the application of computational methods to the study of language change has gained popularity in recent years. Relevant work can be found not only in traditional linguistics venues, but also in journals and conference proceedings representing a surprising variety of disciplines, even outside the humanities and social sciences. Consequently, another central aim of this survey has been to provide pointers into this body of research, which often utilizes datasets and applies methods originating in computational linguistics research. Finally, our main concern here is with computational linguistic studies of vocabulary change utilizing empirical diachronic (corpus) data. We have not attempted to survey a notable and relevant complementary strand of computational work aiming to simulate historical processes in language, including lexical change (see Baker 2008 for an overview).

The work surveyed here falls into two broad categories. One is the modeling and study of diachronic conceptual change (i.e., how the meanings of words change in a language over shorter or longer time spans). This strand of computational linguistic research is closely connected to corresponding efforts in linguistics, often referring to them and suggesting new insights based on large-scale computational studies (e.g., in the form of "laws of semantic change"). This work is surveyed in Section 3, and split into one section on word-level change (Section 3.1) and one on sense-differentiated change (Section 3.2).
The work on word-level change detection considers both count-based context methods and those based on neural embeddings, while the work on sense-differentiated change covers methods based on topic modeling, clustering, and word sense induction.

The other strand of work focuses on lexical replacement, where different words express the same meaning over time. This is not traditionally a specific field in linguistics, but it presents obvious complications for access to historical text archives, where relevant information may be retrievable only through an obsolete label for an entity or phenomenon. This body of work is presented in Section 4.

The terminology and conceptual apparatus used in works on lexical semantic change are multifarious and not consistent across different fields, or often even within the same discipline. For this reason, we provide a brief background description of relevant linguistic work in Section 5.

Much current work in computational linguistics depends crucially on (formal, automatic, quantitative, and reproducible) evaluation. Given the different aims of the surveyed research, evaluation procedures will look correspondingly different. We devote Section 6 to a discussion of general methodological issues and evaluation.

A characteristic feature of computational linguistics is the close connection to concrete computer applications. In Section 7 we describe a range of relevant applications presented in the literature, both more research-oriented (e.g., visualizations of vocabulary change) and downstream applications, typically information access systems. We end with a summary of the main points garnered from our literature survey, and provide a conclusion and some recommendations for future work (Section 8).
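To make the count-based, word-level approaches surveyed in Section 3.1 concrete, the following minimal sketch represents a target word by its co-occurrence counts in each of two time slices and scores change as the cosine distance between the two vectors. The corpora, target word, and window size here are invented for illustration; real systems use much larger corpora and typically apply weighting (e.g., PPMI) or learned embeddings instead of raw counts.

```python
from collections import Counter
import math

def context_vector(sentences, target, window=2):
    """Count words co-occurring within +/- window of the target word."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != target)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy time-sliced corpora (invented for illustration only).
corpus_1900 = ["the gay party was merry and gay",
               "a gay and cheerful tune"]
corpus_2000 = ["the gay rights movement grew",
               "gay and lesbian communities organised"]

v_old = context_vector(corpus_1900, "gay")
v_new = context_vector(corpus_2000, "gay")
change_score = 1.0 - cosine(v_old, v_new)  # higher = more change
```

A score near 0 indicates stable contexts across the two periods; a score near 1 indicates that the word's contexts (and, by the distributional hypothesis, its usage) have shifted.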
We believe our survey can be helpful both for researchers already working on related topics and for those new to this field, for example, PhD candidates who wish to quickly grasp the recent advances in the field and pinpoint promising research opportunities and directions.

2. Conceptual Framework and Application

2.1 Model and Classification of Language Change

Lexical change can be seen as a special case of lexical variation, which can be attributed to many different linguistic and extralinguistic factors. In other words, we see the task of establishing that we are dealing with variants of the same item (in some relevant sense) – items of form or content – as logically separate from establishing that the variation is classifiable as lexical change.

However, in language, form and function are always interdependent (this assumption defines the core of the Saussurean linguistic sign), so in actual practice we cannot recognize linguistic forms in the absence of their meanings, and vice versa. This principle is not invalidated by the fact that many orthographies provide (partial) analyses of the form–meaning complexes, in the form of word spacing or different word-initial, -final and -medial shapes of letters, etc. There are still many cases where orthographic conventions do not provide enough clues, a notable example being multiword expressions, and in such cases tokenization simply does not provide all and only the lexical forms present in the text (Dridan and Oepen 2012).

Investigation of lexical change is further complicated by the fact that – as just noted – observed variation in lexical form between different text materials need not be due to diachronic causes at all, even if the materials happen to be from different time periods. Linguists are well aware that, even seen as a synchronic entity, language is full of variation at all linguistic levels.
In spoken language, this kind of variation
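The tokenization point above can be sketched briefly: naive whitespace tokenization yields word forms, not lexical items, so a multiword expression is split into pieces that do not individually carry its meaning. The toy MWE lexicon and greedy longest-match strategy below are invented for illustration; real systems use curated lexical resources and more sophisticated matching.

```python
def tokenize(text):
    """Naive whitespace tokenization: returns word forms only."""
    return text.lower().split()

def mwe_merge(tokens, lexicon):
    """Greedily merge known multiword expressions into single lexical units."""
    out, i = [], 0
    while i < len(tokens):
        for n in (3, 2):  # try longer expressions first
            if i + n <= len(tokens) and " ".join(tokens[i:i + n]) in lexicon:
                out.append("_".join(tokens[i:i + n]))
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

lexicon = {"kick the bucket", "in spite of"}  # toy MWE lexicon
tokens = tokenize("he will kick the bucket in spite of the doctors")
units = mwe_merge(tokens, lexicon)
# tokens: ['he', 'will', 'kick', 'the', 'bucket', ...]  -- the idiom is lost
# units:  ['he', 'will', 'kick_the_bucket', 'in_spite_of', 'the', 'doctors']
```

The contrast between `tokens` and `units` shows why tokenization alone does not recover all and only the lexical forms present in a text: the idiomatic unit "kick the bucket" only becomes a single candidate for semantic change tracking after MWE merging.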