Survey of Computational Approaches to Lexical Semantic Change Detection


Nina Tahmasebi, University of Gothenburg
Lars Borin, University of Gothenburg
Adam Jatowt, Kyoto University

Our languages are in constant flux, driven by external factors such as cultural, societal, and technological changes, as well as by only partially understood internal motivations. Words acquire new meanings and lose old senses, new words are coined or borrowed from other languages, and obsolete words slide into obscurity. Understanding the characteristics of shifts in the meaning and in the use of words is useful for those who work with the content of historical texts and for the interested general public, but also in and of itself. The findings from automatic lexical semantic change detection, and the models of diachronic conceptual change, are also currently being incorporated in approaches for measuring across-time document similarity, information retrieval from long-term document archives, the design of OCR algorithms, and so on. In recent years we have seen a surge of interest in the academic community in computational methods and tools supporting inquiry into diachronic conceptual change and lexical replacement. This article provides a comprehensive survey of recent computational techniques to tackle both.

1. Introduction

Vocabulary change has long been a topic of interest to linguists and the general public alike. This is not surprising considering the central role of language in all human spheres of activity, together with the fact that words are its most salient elements. Thus it is natural that we want to know the "stories of the words we use", including when and how words came to possess the senses they currently have, as well as what currently unused senses they had in the past.
Professionals and the general public alike are interested in the origins and history of our language, as testified to by the numerous books on semantic change aimed at a wide readership. Traditionally, vocabulary change has been studied by linguists and other scholars in the humanities and social sciences with manual, "close-reading" approaches. While this is still largely the case inside linguistics, recently we have seen proposals, originating primarily from computational linguistics and computer science, for how semi-automatic and automatic methods could be used to scale up and enhance this research. Indeed, over the last two decades we have observed a surge of research papers dealing with the detection of lexical semantic changes and the formulation of generalizations about them, based on datasets spanning decades or centuries. With the digitization of historical documents proceeding apace in many different contexts, accounting for vocabulary change has also become a concern in the design of information access systems for this rapidly growing body of texts. At the same time, and as a result, large-scale corpora have become available, allowing the testing of computational approaches on related tasks and providing quantitative support for various hypotheses.

∗ Box 200, SE-405 30 Gothenburg, Sweden, E-mail: [email protected]
∗∗ Box 200, SE-405 30 Gothenburg, Sweden, E-mail: [email protected]
† Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 Japan, E-mail: [email protected]

arXiv:1811.06278v2 [cs.CL] 13 Mar 2019. © 2018 Association for Computational Linguistics. Computational Linguistics, Volume 1, Number 1.

Despite the recent increase in research using computational approaches to investigate lexical semantic changes, the community lacks a solid and extensive overview of this growing field. The aim of the present survey is to fill this gap. While we were preparing this survey article, two related surveys appeared, illustrating the timeliness of the topic.
The survey by Kutuzov et al. (2018) has a narrower scope, focusing entirely on diachronic word embeddings. The broader survey presented by Tang (2018) covers much of the same field as ours in terms of computational linguistics work, but provides considerably less discussion of the connections and relevance of this work to linguistic research. A clear aim in preparing our presentation has been to anchor it firmly in mainstream historical linguistics and lexical typology, the two linguistic subdisciplines most relevant to our survey. Further, the application of computational methods to the study of language change has gained popularity in recent years. Relevant work can be found not only in traditional linguistics venues, but also in journals and conference proceedings representing a surprising variety of disciplines, even outside the humanities and social sciences. Consequently, another central aim of this survey has been to provide pointers into this body of research, which often utilizes datasets and applies methods originating in computational linguistics research. Finally, our main concern here is with computational linguistic studies of vocabulary change utilizing empirical diachronic (corpus) data. We have not attempted to survey a notable and relevant complementary strand of computational work aiming to simulate historical processes in language, including lexical change (see Baker 2008 for an overview). The work surveyed here falls into two broad categories. One is the modeling and study of diachronic conceptual change, i.e., how the meanings of words change in a language over shorter or longer time spans. This strand of computational linguistic research is closely connected to corresponding efforts in linguistics, often referring to them and suggesting new insights based on large-scale computational studies (e.g., in the form of "laws of semantic change").
This work is surveyed in Section 3, which is split into one section on word-level change (Section 3.1) and one on sense-differentiated change (Section 3.2). Word-level change detection covers both count-based context methods and those based on neural embeddings, while sense-differentiated change covers topic modeling-based, clustering-based, and word sense induction-based methods. The other strand of work focuses on lexical replacement, where different words express the same meaning over time. This is not traditionally a specific field in linguistics, but it presents obvious complications for access to historical text archives, where relevant information may be retrievable only through an obsolete label for an entity or phenomenon. This body of work is presented in Section 4. The terminology and conceptual apparatus used in work on lexical semantic change are multifarious and not consistent across different fields, or often even within the same discipline. For this reason, we provide a brief background description of the relevant linguistic work in Section 5. Much current work in computational linguistics depends crucially on (formal, automatic, quantitative, and reproducible) evaluation. Given the different aims of the surveyed research, evaluation procedures will look correspondingly different. We devote Section 6 to a discussion of general methodological issues and evaluation. A characteristic feature of computational linguistics is its close connection to concrete computer applications. In Section 7 we describe a range of relevant applications presented in the literature, both research-oriented ones (e.g., visualizations of vocabulary change) and downstream applications, typically information access systems. We end with a summary of the main points garnered from our literature survey, and provide conclusions and some recommendations for future work (Section 8).
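To make the word-level, count-based strand concrete, the following is a minimal sketch (not any specific surveyed system) of the common baseline: represent a target word in each time period by a vector of co-occurrence counts within a fixed context window, then score change as one minus the cosine similarity between the two period vectors. The toy corpora, the window size, and the target word "cell" are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def context_vector(corpus, target, window=2):
    """Count words co-occurring within +/-window tokens of the target word."""
    counts = Counter()
    for sent in corpus:
        for i, tok in enumerate(sent):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(sent[lo:hi], lo) if j != i)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpora for two "time periods": "cell" shifts from
# biological contexts to telephone contexts.
period_1 = [["the", "cell", "membrane", "divides"],
            ["a", "cell", "under", "the", "microscope"]]
period_2 = [["my", "cell", "phone", "rang"],
            ["call", "me", "on", "my", "cell", "phone"]]

v1 = context_vector(period_1, "cell")
v2 = context_vector(period_2, "cell")
change_score = 1.0 - cosine(v1, v2)  # higher = more change
```

Real systems differ mainly in how the vectors are built (e.g., PPMI weighting, dimensionality reduction, or neural embeddings aligned across periods), but the comparison step is typically this kind of vector distance.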
We believe our survey can be helpful both for researchers already working on related topics and for those new to this field, for example PhD candidates who wish to quickly grasp the recent advances in the field and pinpoint promising research opportunities and directions.

2. Conceptual Framework and Application

2.1 Model and Classification of Language Change

Lexical change can be seen as a special case of lexical variation, which can be attributed to many different linguistic and extralinguistic factors. In other words, we see the task of establishing that we are dealing with variants of the same item (in some relevant sense) – items of form or content – as logically separate from establishing that the variation is classifiable as lexical change. However, in language, form and function are always interdependent (this assumption defines the core of the Saussurean linguistic sign), so in actual practice we cannot recognize linguistic forms in the absence of their meanings, and vice versa. This principle is not invalidated by the fact that many orthographies provide (partial) analyses of form–meaning complexes, in the form of word spacing or different word-initial, -final, and -medial shapes of letters, etc. There are still many cases where orthographic conventions do not provide enough clues, a notable example being multiword expressions, and in such cases tokenization simply does not yield all and only the lexical forms present in the text (Dridan and Oepen 2012). Investigation of lexical change is further complicated by the fact that – as just noted – observed variation in lexical form between different text materials need not be due to diachronic causes at all, even if the materials happen to be from different time periods. Linguists are well aware that, even seen as a synchronic entity, language is full of variation at all linguistic levels.
In spoken language, this kind of variation