Web Text Corpus for Natural Language Processing


Vinci Liu and James R. Curran
School of Information Technologies
University of Sydney
NSW 2006, Australia
{vinci,james}@it.usyd.edu.au

Abstract

Web text has been successfully used as training data for many NLP applications. While most previous work accesses web text through search engine hit counts, we created a Web Corpus by downloading web pages to create a topic-diverse collection of 10 billion words of English. We show that for context-sensitive spelling correction the Web Corpus results are better than using a search engine. For thesaurus extraction, it achieved similar overall results to a corpus of newspaper text. With many more words available on the web, better results can be obtained by collecting much larger web corpora.

1 Introduction

Traditional written corpora for linguistics research are created primarily from printed text, such as newspaper articles and books. With the growth of the World Wide Web as an information resource, it is increasingly being used as training data in Natural Language Processing (NLP) tasks.

There are many advantages to creating a corpus from web data rather than printed text. All web data is already in electronic form and therefore readable by computers, whereas not all printed data is available electronically. The vast amount of text available on the web is a major advantage, with Keller and Lapata (2003) estimating that over 98 billion words were indexed by Google in 2003.

The performance of NLP systems tends to improve with increasing amounts of training data. Banko and Brill (2001) showed that for context-sensitive spelling correction, increasing the training data size increases the accuracy, for up to 1 billion words in their experiments. However, there is a much larger quantity of freely available web text to exploit.

To date, most NLP tasks that have utilised web data have accessed it through search engines, using only the hit counts or examining a limited number of results pages. The tasks are reduced to determining n-gram probabilities, which are then estimated by hit counts from search engine queries. This method only gathers information from the hit counts and does not require the computationally expensive downloading of actual text for analysis. Unfortunately, search engines were not designed for NLP research, and the reported hit counts are subject to uncontrolled variations and approximations (Nakov and Hearst, 2005). Volk (2002) proposed a linguistic search engine to extract word relationships more accurately.

We created a 10 billion word topic-diverse Web Corpus by spidering websites from a set of seed URLs. The seed set is selected from the Open Directory to ensure that a diverse range of topics is included in the corpus. A process of text cleaning transforms the HTML text into a form useable by most NLP systems – tokenised words, one sentence per line. Text filtering removes unwanted text from the corpus, such as non-English sentences and most lines of text that are not grammatical sentences. We compare the vocabulary of the Web Corpus with newswire.

Our Web Corpus is evaluated on two NLP tasks. Context-sensitive spelling correction is a disambiguation problem, where the correct word from a confusion set (e.g. {their, they're}) needs to be selected for a given context. Thesaurus extraction is a similarity task, where synonyms of a target word are extracted from a corpus of unlabelled text. Our evaluation demonstrates that web text can be used for the same tasks as search engine hit counts and newspaper text.
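To make the disambiguation setting concrete, here is a minimal sketch of confusion-set spelling correction driven by context counts, in the spirit of the Banko and Brill setup described above. It is an illustration only: the window size, scoring by raw context frequency, and all function names are assumptions of this sketch, not the method evaluated in the paper.

```python
from collections import Counter

def train_context_counts(tokenised_sentences, confusion_set, window=2):
    """Count (left-context, word, right-context) windows for each
    confusion-set member seen in the training corpus."""
    counts = Counter()
    for sent in tokenised_sentences:
        for i, token in enumerate(sent):
            if token in confusion_set:
                left = tuple(sent[max(0, i - window):i])
                right = tuple(sent[i + 1:i + 1 + window])
                counts[(left, token, right)] += 1
    return counts

def correct(sent, i, confusion_set, counts, window=2):
    """Pick the confusion-set member whose context window was seen
    most often in training; fall back to the original token."""
    left = tuple(sent[max(0, i - window):i])
    right = tuple(sent[i + 1:i + 1 + window])
    best = max(confusion_set,
               key=lambda w: counts.get((left, w, right), 0))
    return best if counts.get((left, best, right), 0) > 0 else sent[i]

# Toy usage: disambiguate {their, they're} in context.
corpus = [["they're", "going", "home"], ["their", "house", "is", "big"]]
cs = {"their", "they're"}
model = train_context_counts(corpus, cs)
print(correct(["their", "going", "home"], 0, cs, model))  # -> "they're"
```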
2 Existing Web Corpora

The web has become an indispensable resource with a vast amount of information available. Many NLP tasks have successfully utilised web data, including machine translation (Grefenstette, 1999), prepositional phrase attachment (Volk, 2001), and other-anaphora resolution (Modjeska et al., 2003).

2.1 Search Engine Hit Counts

Most NLP systems that have used the web access it via search engines such as Altavista and Google. N-gram counts are approximated by literal queries "w1 ... wn". Relations between two words are approximated in Altavista by the NEAR operator (which locates word pairs within 10 tokens of each other). The overall coverage of the queries can be expanded by morphological expansion of the search terms.

Keller and Lapata (2003) demonstrated a high degree of correlation between n-gram estimates from search engine hit counts and n-gram frequencies obtained from traditional corpora such as the British National Corpus (BNC). The hit counts also had a higher correlation to human plausibility judgements than the BNC counts.

The web count method contrasts with traditional methods where the frequencies are obtained from a corpus of locally available text. While the corpus is much smaller than the web, an accurate count and further text processing is possible because all of the contexts are readily accessible. The web count method obtains only an approximate number of matches on the web, with no control over which pages are indexed by the search engines and with no further analysis possible.

There are a number of limitations in the search engine approximations. As many search engines discard punctuation information (especially when using the NEAR operator), words considered adjacent to each other could actually lie in different sentences or paragraphs. For example, in Volk (2001) the system assumes that a preposition attaches to a noun simply when the noun appears within a fixed context window of the preposition. The preposition and noun could in fact be related differently or be in different sentences altogether.

The speed of querying search engines is another concern. Keller and Lapata (2003) needed to obtain the frequency counts of 26,271 test adjective pairs from the web and from the BNC for the task of prenominal adjective ordering. While extracting this information from the BNC presented no difficulty, making so many queries to Altavista was too time-consuming. They had to reduce the size of the test set to obtain a result.

Lapata and Keller (2005) performed a wide range of NLP tasks using web data by querying Altavista and Google. This included a variety of generation tasks (e.g. machine translation candidate selection) and analysis tasks (e.g. prepositional phrase attachment, countability detection). They showed that while web counts usually outperformed BNC counts and consistently outperformed the baseline, the best performing system is usually a supervised method trained on annotated data. Keller and Lapata concluded that having access to linguistic information (accurate n-gram counts, POS tags, and parses) outperforms using a large amount of web data.

2.2 Spidered Web Corpora

A few projects have utilised data downloaded from the web. Ravichandran et al. (2005) used a collection of 31 million web pages to produce noun similarity lists. They found that most NLP algorithms are unable to run on web-scale data, especially those with quadratic running time. Halacsy et al. (2004) created a Hungarian corpus from the web by downloading text from the .hu domain. From an 18 million page crawl of the web, a 1 billion word corpus was created (removing duplicates and non-Hungarian text).

A terabyte-sized corpus of the web was collected at the University of Waterloo in 2001. A breadth-first search from a seed set of university home pages yielded over 53 billion words, requiring 960GB of storage. Clarke et al. (2002) and Terra and Clarke (2003) used this corpus for their question answering system. They obtained increasing performance with increasing corpus size but began reaching asymptotic behaviour at the 300-500GB range.
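Section 2.1 reduces the hit-count approach to a simple ratio of page counts. A sketch of that estimate is below; `hit_count` is a hypothetical stub, since each engine (Altavista's NEAR queries, Google's quoted phrases) exposed its counts differently, and as noted above the returned numbers are approximations.

```python
def hit_count(query: str) -> int:
    """Placeholder for a search engine hit count. Real engines returned
    only an approximate number of matching pages, one source of the
    uncontrolled variation discussed above. Wire this to an actual
    search API (hypothetical here) before use."""
    raise NotImplementedError

def bigram_probability(w1: str, w2: str) -> float:
    """Estimate P(w2 | w1) from quoted-phrase and single-word hit
    counts, the approximation the hit-count method relies on."""
    joint = hit_count(f'"{w1} {w2}"')   # pages matching the exact phrase
    marginal = hit_count(w1)            # pages containing w1 at all
    return joint / marginal if marginal else 0.0
```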
3 Creating the Web Corpus

There are many challenges in creating a web corpus, as the World Wide Web is unstructured and without a definitive directory. No simple method exists to collect a large representative sample of the web. Two main approaches exist for collecting representative web samples – IP address sampling and random walks. The IP address sampling technique randomly generates IP addresses and explores any websites found (Lawrence and Giles, 1999). This method requires substantial resources, as many attempts are made for each website found. Lawrence and Giles reported that 1 in 269 tries found a web server.

Random walk techniques attempt to simulate a regular undirected web graph (Henzinger et al., 2000). In such a graph, a random walk would produce a uniform sample of the nodes (i.e. the web pages). However, only an approximation of such a graph is possible, as the web is directed (i.e. you cannot easily determine all web pages linking to a particular page). Most implementations of random walks approximate the number of backward links by using information from search engines.

3.1 Web Spidering

[…] External links encountered during this process are added to the link collection of the topic node regardless of the actual topic of the link. Although websites of one topic tend to link to other websites of the same topic, this process contributes to topic drift. As the spider traverses away from the original seed URLs, we are less certain of the topic included in the collection.

3.2 Text Cleaning

Text cleaning is the term we use to describe the overall process of converting raw HTML found on the web into a form useable by NLP algorithms – white-space delimited words, separated into one sentence per line. It consists of many low-level processes which are often accomplished by simple rule-based scripts.
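A breadth-first spider of the kind described in Section 3 and used for the Waterloo corpus can be sketched with only the Python standard library. This is a minimal assumed rendering: the paper's spider additionally tracks Open Directory topic nodes per link, and any production crawler needs robots.txt handling, politeness delays, and duplicate-content detection.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def spider(seed_urls, max_pages=100):
    """Breadth-first crawl from a set of seed URLs, yielding raw HTML."""
    queue = deque(seed_urls)
    seen = set(seed_urls)
    fetched = 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable hosts are common, as Section 3 notes
        fetched += 1
        yield url, html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
```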
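Section 3.2 specifies the target format of text cleaning (white-space delimited tokens, one sentence per line) but not the scripts themselves. The sketch below is one plausible rule-based rendering of that pipeline; the tag-stripping, sentence-splitting, and tokenisation rules here are deliberately naive stand-ins for the paper's low-level processes.

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keep only the text content of an HTML page, dropping the
    contents of script and style elements along with all markup."""
    SKIP = {"script", "style"}
    def __init__(self):
        super().__init__()
        self.parts, self._skip_depth = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1
    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1
    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

def clean(html):
    """Convert raw HTML into tokenised text, one sentence per line."""
    extractor = TextExtractor()
    extractor.feed(html)
    text = " ".join(extractor.parts)
    # Naive sentence splitting on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    lines = []
    for sentence in sentences:
        # Split punctuation off word tokens, then rejoin with spaces.
        tokens = re.findall(r"\w+|[^\w\s]", sentence)
        if tokens:
            lines.append(" ".join(tokens))
    return "\n".join(lines)

print(clean("<p>Hello, world! This is a <b>test</b>.</p>"))
# Hello , world !
# This is a test .
```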
Recommended publications
  • Construction of English-Bodo Parallel Text Corpus for Statistical Machine Translation
    International Journal on Natural Language Computing (IJNLC) Vol.7, No.5, October 2018 CONSTRUCTION OF ENGLISH-BODO PARALLEL TEXT CORPUS FOR STATISTICAL MACHINE TRANSLATION Saiful Islam 1, Abhijit Paul 2, Bipul Syam Purkayastha 3 and Ismail Hussain 4 1,2,3 Department of Computer Science, Assam University, Silchar, India 4 Department of Bodo, Gauhati University, Guwahati, India ABSTRACT A corpus is a large collection of homogeneous and authentic written texts (or speech) of a particular natural language which exists in machine-readable form. The scope of the corpus is endless in Computational Linguistics and Natural Language Processing (NLP). A parallel corpus is a very useful resource for most applications of NLP, especially for Statistical Machine Translation (SMT). SMT is the most popular approach to Machine Translation (MT) today, and it can produce high-quality translations based on large amounts of aligned parallel text corpora in both the source and target languages. Although Bodo is a recognized natural language of India and a co-official language of Assam, the amount of machine-readable information for Bodo is still very low. Therefore, to expand the computerized information of the language, an English to Bodo SMT system has been developed. This paper mainly focuses on building English-Bodo parallel text corpora to implement the English to Bodo SMT system using the Phrase-Based SMT approach. We have designed an E-BPTC (English-Bodo Parallel Text Corpus) creator tool and have constructed English-Bodo parallel text corpora for the General and Newspaper domains. Finally, the quality of the constructed parallel text corpora has been tested using two evaluation techniques in the SMT system.
  • Usage of IT Terminology in Corpus Linguistics Mavlonova Mavluda Davurovna
    International Journal of Innovative Technology and Exploring Engineering (IJITEE) ISSN: 2278-3075, Volume-8, Issue-7S, May 2019 Usage of IT Terminology in Corpus Linguistics Mavlonova Mavluda Davurovna Abstract: Corpus linguistics is one of the newest and fastest-growing methods for teaching and learning different languages. This article provides an introduction to corpus linguistics and a general overview of the field: it describes corpora and recent developments, surveys the fields with which corpora are associated, and focuses on the use of corpus linguistics in IT. Research suggests that corpora drawn from the internet and from primary data will increasingly be used in the IT field. The text-corpus method derives the sets of rules that govern a natural language from texts in that language, and can explore how languages interrelate with one another. In the early period, corpora were used in semantics and related fields; corpus linguistics was not yet widely practised, no computers were used, and the approach was opposed by Noam Chomsky. Modern corpus linguistics was formed in work on English and is now applied to many other languages, and its history is considered part of its methodology [Abdumanapovna, 2018]. A landmark of modern corpus linguistics is the work published by Henry Kucera. Corpus linguistics introduces new methods and techniques for a more complete description of modern languages, and provides opportunities to obtain new results from text sources.
  • Enriching an Explanatory Dictionary with Framenet and Propbank Corpus Examples
    Proceedings of eLex 2019 Enriching an Explanatory Dictionary with FrameNet and PropBank Corpus Examples Pēteris Paikens 1, Normunds Grūzītis 2, Laura Rituma 2, Gunta Nešpore 2, Viktors Lipskis 2, Lauma Pretkalniņa 2, Andrejs Spektors 2 1 Latvian Information Agency LETA, Marijas street 2, Riga, Latvia 2 Institute of Mathematics and Computer Science, University of Latvia, Raina blvd. 29, Riga, Latvia E-mail: [email protected], [email protected] Abstract This paper describes ongoing work to extend an online dictionary of Latvian – Tezaurs.lv – with representative semantically annotated corpus examples according to the FrameNet and PropBank methodologies and word sense inventories. Tezaurs.lv is one of the largest open lexical resources for Latvian, combining information from more than 300 legacy dictionaries and other sources. The corpus examples are extracted from Latvian FrameNet and PropBank corpora, which are manually annotated parallel subsets of a balanced text corpus of contemporary Latvian. The proposed approach augments traditional lexicographic information with modern cross-lingually interpretable information and enables analysis of word senses from the perspective of frame semantics, which is substantially different from (complementary to) the traditional approach applied in Latvian lexicography. In cases where FrameNet and PropBank corpus evidence aligns well with the word sense split in legacy dictionaries, the frame-semantically annotated corpus examples supplement the word sense information with clarifying usage examples and commonly used semantic valence patterns. However, the annotated corpus examples often provide evidence of a different sense split, which is often more coarse-grained and, thus, may help dictionary users to cluster and comprehend a fine-grained sense split suggested by the legacy sources.
  • Toward a Sustainable Handling of Interlinear-Glossed Text in Language Documentation
    Toward a Sustainable Handling of Interlinear-Glossed Text in Language Documentation JOHANN-MATTIS LIST, Max Planck Institute for the Science of Human History, Germany NATHANIEL A. SIMS, The University of California, Santa Barbara, America ROBERT FORKEL, Max Planck Institute for the Science of Human History, Germany While the amount of digitally available data on the world's languages is steadily increasing, with more and more languages being documented, only a small proportion of the language resources produced are sustainable. Data reuse is often difficult due to idiosyncratic formats and a negligence of standards that could help to increase the comparability of linguistic data. The sustainability problem is nicely reflected in the current practice of handling interlinear-glossed text, one of the crucial resources produced in language documentation. Although large collections of glossed texts have been produced so far, the current practice of data handling makes data reuse difficult. In order to address this problem, we propose a first framework for the computer-assisted, sustainable handling of interlinear-glossed text resources. Building on recent standardization proposals for word lists and structural datasets, combined with state-of-the-art methods for automated sequence comparison in historical linguistics, we show how our workflow can be used to lift a collection of interlinear-glossed Qiang texts (an endangered language spoken in Sichuan, China), and how the lifted data can assist linguists in their research. CCS Concepts: • Applied computing → Language translation; Additional Key Words and Phrases: Sino-Tibetan, interlinear-glossed text, computer-assisted language comparison, standardization, Qiang ACM Reference format: Johann-Mattis List, Nathaniel A.
  • Linguistic Annotation of the Digital Papyrological Corpus: Sematia
    Marja Vierros Linguistic Annotation of the Digital Papyrological Corpus: Sematia 1 Introduction: Why annotate papyri linguistically? Linguists who study historical languages usually find the methods of corpus linguistics exceptionally helpful. When the intuitions of native speakers are lacking, as is the case for historical languages, corpora provide researchers with materials that replace the intuitions on which researchers of modern languages can rely. Using large corpora and computers to count and retrieve information also provides empirical back-up from actual language usage. In the case of ancient Greek, the corpus of literary texts (e.g. Thesaurus Linguae Graecae or the Greek and Roman Collection in the Perseus Digital Library) gives information on the Greek language as it was used in lyric poetry, epic, drama, and prose writing; all these literary genres had some artistic aims and therefore do not always describe language as it was used in normal communication. Ancient written texts rarely reflect everyday language use, let alone speech. However, the corpus of documentary papyri gets close. The writers of the papyri vary between professionally trained scribes and individuals who had only rudimentary writing skills. The text types also vary from official decrees and orders to small notes and receipts. What they have in common, though, is that they have been written for a specific, current need instead of trying to impress a specific audience. Documentary papyri represent everyday texts, utilitarian prose, and in that respect they provide us with a very valuable source of language actually used by common people in everyday circumstances.
  • Creating and Using Multilingual Corpora in Translation Studies Claudio Fantinuoli and Federico Zanettin
    Chapter 1 Creating and using multilingual corpora in translation studies Claudio Fantinuoli and Federico Zanettin 1 Introduction Corpus linguistics has become a major paradigm and research methodology in translation theory and practice, with practical applications ranging from professional human translation to machine (assisted) translation and terminology. Corpus-based theoretical and descriptive research has investigated written and interpreted language, and topics such as translation universals and norms, ideology and individual translator style (Laviosa 2002; Olohan 2004; Zanettin 2012), while corpus-based tools and methods have entered the curricula at translation training institutions (Zanettin, Bernardini & Stewart 2003; Beeby, Rodríguez Inés & Sánchez-Gijón 2009). At the same time, taking advantage of advancements in terms of computational power and increasing availability of electronic texts, enormous progress has been made in the last 20 years or so as regards the development of applications for professional translators and machine translation system users (Koehn 2009; Brunette 2013). The contributions to this volume, which are centred around seven European languages (Basque, Dutch, German, Greek, Italian, Spanish and English), add to the range of corpus-based descriptive studies, and provide examples of some less explored applications of corpus analysis methods to translation research. The chapters, which are based on papers first presented at the 7th congress of the European Society of Translation Studies held in Germersheim in July/August 2013, encompass a variety of research aims and methodologies, and vary as concerns corpus design and compilation, and the techniques used to analyze the data. [Claudio Fantinuoli & Federico Zanettin. 2015. Creating and using multilingual corpora in translation studies. In Claudio Fantinuoli & Federico Zanettin (eds.), New directions in corpus-based translation studies, 1–11. Berlin: Language Science Press]
  • Corpus Linguistics As a Tool in Legal Interpretation Lawrence M. Solan and Tammy Gales
    BYU Law Review Volume 2017 | Issue 6 Article 5 August 2017 Corpus Linguistics as a Tool in Legal Interpretation Lawrence M. Solan Tammy Gales Follow this and additional works at: https://digitalcommons.law.byu.edu/lawreview Part of the Applied Linguistics Commons, Constitutional Law Commons, and the Legal Profession Commons Recommended Citation Lawrence M. Solan and Tammy Gales, Corpus Linguistics as a Tool in Legal Interpretation, 2017 BYU L. Rev. 1311 (2018). Available at: https://digitalcommons.law.byu.edu/lawreview/vol2017/iss6/5 Corpus Linguistics as a Tool in Legal Interpretation Lawrence M. Solan* & Tammy Gales** In this paper, we set out to explore conditions in which the use of large linguistic corpora can be optimally employed by judges and others tasked with construing authoritative legal documents. Linguistic corpora, sometimes containing billions of words, are a source of information about the distribution of language usage. Thus, corpora and the tools for using them are most likely to assist in addressing legal issues when the law considers the distribution of language usage to be legally relevant. As Thomas R. Lee and Stephen C. Mouritsen have so ably demonstrated in earlier work, corpus analysis is especially helpful when the legal standard for construction is the ordinary meaning of the document's terms.
  • Text and Corpus Analysis: Computer-Assisted Studies of Language and Culture
    © Copyright Michael Stubbs 1996. This is chapter 1 of Text and Corpus Analysis which was published by Blackwell in 1996. TEXT AND CORPUS ANALYSIS: COMPUTER-ASSISTED STUDIES OF LANGUAGE AND CULTURE MICHAEL STUBBS CHAPTER 1. TEXTS AND TEXT TYPES Attested language text duly recorded is in the focus of attention for the linguist. (Firth 1957b: 29.) The main question discussed in this book is as follows: How can an analysis of the patterns of words and grammar in a text contribute to an understanding of the meaning of the text? I discuss traditions of text analysis in (mainly) British linguistics, and computer-assisted methods of text and corpus analysis. And I provide analyses of several shorter and longer texts, and also of patterns of language across millions of words of text corpora. The main emphasis is on the analysis of attested, naturally occurring textual data. An exclusive concentration on the text alone is, however, not an adequate basis for text interpretation, and this introductory chapter discusses the appropriate balance for text analysis, between the text itself and its production and reception. Such analyses of attested texts are important for both practical and theoretical reasons, and the main reasons can be easily and briefly stated. First, there are many areas of the everyday world in which the systematic and critical interpretation of texts is an important social skill. One striking example occurs in courtrooms, where very important decisions, by juries as well as lawyers, can depend on such interpretations. Second, much of theoretical linguistics over the past fifty years has been based on the study of isolated sentences.
  • Latvian Framenet: Cross-Lingual Issues
    Human Language Technologies – The Baltic Perspective K. Muischnek and K. Müürisep (Eds.) © 2018 The authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0). doi:10.3233/978-1-61499-912-6-96 Latvian FrameNet: Cross-Lingual Issues Gunta NEŠPORE-BĒRZKALNE, Baiba SAULĪTE and Normunds GRŪZĪTIS Institute of Mathematics and Computer Science, University of Latvia Abstract. This paper reports the lessons learned while creating a FrameNet-annotated text corpus of Latvian. This is still an ongoing work, a part of a larger project which aims at the creation of a multilayer text corpus, anchored in cross-lingual state-of-the-art representations: Universal Dependencies (UD), FrameNet and PropBank, as well as Abstract Meaning Representation (AMR). For the FrameNet layer, we use the latest frame inventory of Berkeley FrameNet (BFN v1.7), while the annotation itself is done on top of the underlying UD layer. We strictly follow a corpus-driven approach, meaning that lexical units (LU) in Latvian FrameNet are created only based on the annotated corpus examples. Since we are aiming at a medium-sized yet general-purpose corpus, an important aspect that we take into account is the variety and balance of the corpus in terms of genres, domains and LUs. We have finished the first phase of the FrameNet corpus annotation, and we have collected and discuss cross-lingual issues and their possible solutions. The issues are relevant for other languages as well, particularly if the goal is to maintain cross-lingual compatibility via BFN.
  • Against Corpus Linguistics
    Against Corpus Linguistics JOHN S. EHRETT* Corpus linguistics—the use of large, computerized word databases as tools for discovering linguistic meaning—has increasingly become a topic of interest among scholars of constitutional and statutory interpretation. Some judges and academics have recently argued, across the pages of multiple law journals, that members of the judiciary ought to employ these new technologies when seeking to ascertain the original public meaning of a given text. Corpus linguistics, in the minds of its proponents, is a powerful instrument for rendering constitutional originalism and statutory textualism "scientific" and warding off accusations of interpretive subjectivity. This Article takes the opposite view: on balance, judges should refrain from the use of corpora. Although corpus linguistics analysis may appear highly promising, it carries with it several under-examined dangers—including the collapse of essential distinctions between resource quality, the entrenchment of covert linguistic biases, and a loss of reviewability by higher courts. TABLE OF CONTENTS: Introduction; I. The Rise of Corpus Linguistics: A. What Is Corpus Linguistics? (1. Frequency; 2. Collocation; 3. Keywords in Context (KWIC)); B. Corpus Linguistics in the Courts (1. United States v. Costello; 2. State v. Canton; 3. State v. Rasabout); II. Against "Judicializing" Corpus Linguistics: A. Subversion of Source Authority Hierarchies; B. Improper Parametric Outsourcing; C. Methodological Inaccessibility; III. The Future of Judging and Corpus Linguistics. * Yale Law School, J.D. 2017. © 2019, John S. Ehrett. INTRODUCTION "Corpus linguistics" may sound like a forensic investigative procedure on CSI or NCIS, but the reality is far less dramatic—though no less important.
  • TEI and the Documentation of Mixtepec-Mixtec Jack Bowers
    Language Documentation and Standards in Digital Humanities: TEI and the documentation of Mixtepec-Mixtec Jack Bowers To cite this version: Jack Bowers. Language Documentation and Standards in Digital Humanities: TEI and the documentation of Mixtepec-Mixtec. Computation and Language [cs.CL]. École Pratique des Hautes Études, 2020. English. tel-03131936. HAL Id: tel-03131936 https://tel.archives-ouvertes.fr/tel-03131936 Submitted on 4 Feb 2021. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Prepared at the École Pratique des Hautes Études. Doctoral school n° 472, École doctorale de l'École Pratique des Hautes Études. Speciality: Linguistics. Defended by Jack Bowers on 8 October 2020. Composition of the jury: Guillaume Jacques, Directeur de Recherche, CNRS (President); Alexis Michaud, Chargé de Recherche, CNRS (Reviewer); Tomaž Erjavec, Senior Researcher, Jožef Stefan Institute (Reviewer); Enrique Palancar, Directeur de Recherche, CNRS (Examiner); Karlheinz Moerth, Senior Researcher, Austrian Center for Digital Humanities and Cultural Heritage (Examiner); Emmanuel Schang, Maître de Conférence, Université d'Orléans (Examiner); Benoit Sagot, Chargé de Recherche, Inria (Examiner); Laurent Romary, Directeur de Recherche, Inria (Thesis supervisor).
  • Natural Language Processing, Sentiment Analysis and Clinical Analytics
    Natural Language Processing, Sentiment Analysis and Clinical Analytics Adil Rajput ([email protected]) Assistant Professor, Information System Department, Effat University An Nazlah Al Yamaniyyah, Jeddah 22332, Jeddah, Saudi Arabia Abstract Recent advances in Big Data have prompted health care practitioners to utilize the data available on social media to discern sentiment and expressions of emotion. Health Informatics and Clinical Analytics depend heavily on information gathered from diverse sources. Traditionally, a healthcare practitioner will ask a patient to fill out a questionnaire that will form the basis of diagnosing the medical condition. However, medical practitioners have access to many sources of data including the patients' writings on various media. Natural Language Processing (NLP) allows researchers to gather such data and analyze it to glean the underlying meaning of such writings. The field of sentiment analysis – applied to many other domains – depends heavily on techniques utilized by NLP. This work will look into various prevalent theories underlying the NLP field and how they can be leveraged to gather users' sentiments on social media. Such sentiments can be culled over a period of time, thus minimizing the errors introduced by data input and other stressors. Furthermore, we look at some applications of sentiment analysis and the application of NLP to mental health. The reader will also learn about the NLTK toolkit that implements various NLP theories and how it can make the data scavenging process a lot easier. Keywords: Natural Language Processing (NLP), Data scraping, social media analysis, Sentiment Analysis, Healthcare analytics, Clinical Analytics, Machine Learning, Corpus 1. Introduction The Big Data revolution has changed the way scientists approach problems in almost every (if not all) area of research.