Healthcare NER Models Using Language Model Pretraining: Empirical Evaluation of Healthcare NER Model Performance with Limited Training Data

Total Pages: 16

File Type: pdf, Size: 1020 KB

Healthcare NER Models Using Language Model Pretraining: Empirical Evaluation of Healthcare NER Model Performance with Limited Training Data

Amogh Kamat Tarcar* (Persistent Systems Limited, Goa, India), Aashis Tiwari (Persistent Systems Limited, Pune, India), Dattaraj Rao (Persistent Systems Limited, Goa, India), Vineet Naique Dhaimodker (National Institute of Technology, Goa, India), Penjo Rebelo (National Institute of Technology, Goa, India), Rahul Desai (National Institute of Technology, Goa, India)

ABSTRACT

In this paper, we present our approach to extracting structured information from unstructured Electronic Health Records (EHR) [2], which can be used, for example, to study adverse drug reactions in patients due to chemicals in their products. Our solution uses a combination of Natural Language Processing (NLP) techniques and a web-based annotation tool to optimize the performance of a custom Named Entity Recognition (NER) [1] model trained on a limited amount of EHR training data. This work was presented at the first Health Search and Data Mining Workshop (HSDM 2020) [26].

We showcase a combination of tools and techniques leveraging recent advancements in NLP aimed at targeting domain shifts by applying transfer learning and language model pre-training techniques [3]. We present a comparison of our technique to the current popular approaches and show the effective increase in performance of the NER model and the reduction in time needed to annotate data. A key observation of the results presented is that the F1 score (0.734) of the model trained with our approach on just 50% of the available training data outperforms the F1 score (0.704) of the blank spaCy model without the language model component trained on 100% of the available training data.

We also demonstrate an annotation tool to minimize domain expert time and the manual effort required to generate such a training dataset. Further, we plan to release the annotated dataset as well as the pre-trained model to the community to further research in medical health records.

KEYWORDS

Transfer Learning, Named Entity Recognition, Natural Language Processing, Pre-Training, Language Modeling, Electronic Health Records (EHR), Annotations.

ACM Reference format:

Amogh Kamat Tarcar, Aashis Tiwari, Dattaraj Rao, Vineet Naique Dhaimodker, Penjo Rebelo and Rahul Desai. 2020. Healthcare NER Models Using Language Model Pretraining: Empirical Evaluation of Healthcare NER Model Performance with Limited Training Data. In Proceedings of Health Search and Data Mining Workshop (HSDM 2020) in the 13th ACM International WSDM Conference (WSDM 2020). ACM, Houston, TX, USA. https://doi.org/10.1145/3336191.3371879

*Corresponding Author. Presented at the first Health Search and Data Mining Workshop (HSDM 2020) in the 13th ACM International WSDM Conference (WSDM 2020), held in Feb 2020, Houston, Texas, USA. Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. INTRODUCTION

Extracting structured information from unstructured text such as EHRs and medical literature has always been a challenging task. Recent advancements in machine learning take advantage of the large text corpora available in scientific literature as well as medical and pharmaceutical web sites, and train systems which can be leveraged for several NLP tasks ranging from text mining to question answering. Along with progress in the research space, there has been significant progress in the libraries and tools available for industry use.

The specific problem we focused on was extracting adverse drug reactions from EHRs using NER. The solution required us to extract key entities such as prescribed drugs with dosage and the symptoms and diseases mentioned in the EHRs. The extracted entities would be processed further downstream to link the entities and leverage dictionary-based techniques for flagging any symptoms which could potentially be adverse drug reactions to the prescribed medicines.

A crucial component in our devised solution employed a custom NER model for extracting key entities from EHRs. State-of-the-art Named Entity Recognition models built using deep learning techniques [13] extract entities from text sentences not only by identifying the keywords or linguistic shape of entities, but also by leveraging the context of the entity in the sentence. Furthermore, with language model pre-trained embeddings, the NER models leverage the proximity of other words which appear along with the entity in domain-specific literature.

One of the key challenges in training NLP-based models is the availability of reasonably sized, high-quality annotated datasets. Further, in a typical industrial setting, the relative difficulty in garnering significant domain expert time, and the lack of tools and techniques for effective annotation along with the ability to review such annotations to minimize human errors, affect research and the benchmarking of new learning techniques and algorithms.

Additionally, models like NER often need a significant amount of data to generalize well to a vocabulary and language domain. Such vast amounts of training data are often unavailable or difficult to manufacture or synthesize. To bridge the gap between academic developments and industrial requirements, we designed a series of experiments employing transfer learning from pre-trained models while working with a comparatively smaller dataset.

Transfer learning techniques [3] have been largely successful in the image domain and are advancing steadily in the natural language domain with the availability of pre-trained language embeddings and pre-trained models.

In this paper we present findings of our experiments to solve the industrial problem of training NER models with limited data using spaCy [7], a state-of-the-art industrial-strength natural language processing package, along with the latest techniques in transfer learning.

The paper is organized as follows. Section 2 describes the motivation for our experiments, followed by Section 3, which discusses the problem and our solution, both in algorithmic and implementation terms, and evaluates the results produced by our solution. Section 4 discusses the results and Section 5 concludes and suggests directions for future work.

2. MOTIVATION

Recent advancements in NLP, also known as the ImageNet moment in NLP [3], have shown significant improvements in many NLP tasks using transfer learning. Language models like ELMo [4] and BERT [5] have shown the effect of language model pre-training on downstream NLP tasks. Language models are capable of adjusting to changes in the textual domain through a process of fine-tuning. Also, in this self-supervised learning scenario, there is an implicit annotation in sentences, i.e. predicting the next token (word) given a sequence of tokens appearing earlier in the sequence. Given all this, we can adjust to a new domain-specific vocabulary with very little training time and almost …

Using task-specific annotation tools can minimize the time to generate high-quality annotated datasets for training models. The traditional process of annotating data is slow, but fundamental to most NLP models. It often acts as a hindrance in evaluating and benchmarking multiple models, as well as in parameter tuning of models. Many tools such as Doccano [6] exist in the open-source community that help in solving this problem. We developed an in-house tool which we could customize to speed up the annotation process.

3. USE CASE DETAILS

Studying adverse reactions due to chemicals in a drug on the patient is central to drug development in healthcare. Pharmacovigilance (PV) [21], as described by the WHO, is defined as "the science and activities relating to the detection, assessment, understanding and prevention of adverse effects or any other drug-related problem." Pharmaceutical companies often want to understand the conditions and pre-conditions under which a drug might have an adverse reaction on a patient. This would help in research and studies of the drugs and also reduce or prevent risks of any harm to the patient.

Co-occurrence of diseases and chemicals in a patient's EHR is useful in studies and research for most pharmaceutical companies. However, EHRs are unstructured data, and additional processing is required to extract structured information such as named entities of interest. Such extraction can lead to significant savings of manual labor and minimize the time taken to get a new drug to market.

We developed custom healthcare NER models to extract phrases related to (pharmaceutical) chemicals with dosage, diseases and symptoms from EHRs. As the entities were specific to the domain text, an in-house annotated dataset was created using our custom-built annotation tool. A number of experiments were designed and executed for training custom NER models on annotated data from base models (spaCy [7] and scispacy [8]) using transfer learning. Section 3.1 describes the dataset preparation, followed by Section 3.2, which presents an architecture overview. Section 3.3 presents experiment details and Section 3.4 describes the results obtained.

3.1. DATASET PREPARATION

We created a domain-specific corpus by collating publicly available sample medical notes and drug public assessment reports from the European Medical Agency (EMA) [9] and Sample Medical Transcripts [10].
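A minimal sketch of the kind of training workflow described above — fine-tuning a spaCy NER component on a small set of annotated EHR sentences, starting from a scispacy base model or a blank English pipeline. It targets the spaCy 2.x API that was current when the paper was written; the entity labels, example sentences and character offsets are illustrative assumptions, not the authors' released dataset or code.

```python
import random
import spacy

# Toy annotated examples in spaCy 2.x offset format (start, end, label).
# Labels and sentences are placeholders for the kinds of EHR entities the
# paper extracts (drugs, dosages, diseases, symptoms).
TRAIN_DATA = [
    ("Patient was prescribed 500 mg metformin for type 2 diabetes.",
     {"entities": [(23, 29, "DOSAGE"), (30, 39, "DRUG"), (44, 59, "DISEASE")]}),
    ("She reported nausea and dizziness after starting lisinopril.",
     {"entities": [(13, 19, "SYMPTOM"), (24, 33, "SYMPTOM"), (49, 59, "DRUG")]}),
]

# Prefer a biomedical base model (scispacy); fall back to a blank pipeline.
try:
    nlp = spacy.load("en_core_sci_sm")
except OSError:
    nlp = spacy.blank("en")

if "ner" not in nlp.pipe_names:
    ner = nlp.create_pipe("ner")
    nlp.add_pipe(ner)
else:
    ner = nlp.get_pipe("ner")

for _, annotations in TRAIN_DATA:
    for _, _, label in annotations["entities"]:
        ner.add_label(label)

# Train only the NER component; any other pipes (tagger, parser) stay frozen.
other_pipes = [p for p in nlp.pipe_names if p != "ner"]
with nlp.disable_pipes(*other_pipes):
    # begin_training() (re)initialises weights; when continuing from a fully
    # pre-trained pipeline, nlp.resume_training() keeps existing weights.
    optimizer = nlp.begin_training()
    for epoch in range(20):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.2, losses=losses)
        print(epoch, losses)

doc = nlp("He developed a rash after taking 20 mg atorvastatin daily.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

With so few examples this only demonstrates the mechanics; the paper's point is that starting from a language-model-pretrained base model reduces how much annotated data of this kind is needed to reach a given F1 score.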
Recommended publications
  • 15 Things You Should Know About Spacy
    15 THINGS YOU SHOULD KNOW ABOUT SPACY. ALEXANDER CS HENDORF @ EUROPYTHON 2020.
    0 - Preface - Natural Language Processing. Natural Language Processing (NLP): an unstructured data avalanche. Different kinds of data: structured data (est. 20% of all data) - data in databases, format-xyz; unstructured data (est. 80% of all data, requires extensive pre-processing and different methods for analysis) - verbal, text, sign communication; pictures, movies, x-rays; pink noise, singularities. Some applications of NLP: dialogue systems (chatbots), machine translation, sentiment analysis, speech-to-text and vice versa, spelling / grammar checking, text completion. Many smart things in text data! Rule-based and exploratory / statistical NLP use cases.
    Alexander C. S. Hendorf - Partner & Principal Consultant Data Science & AI, @hendorf. Consulting and building AI & data science for enterprises. Python Software Foundation Fellow, Python Softwareverband chair, Emeritus EuroPython organizer, currently Program Chair of EuroSciPy, PyConDE & PyData Berlin, PyData community organizer. Speaker Europe & USA: MongoDB World New York / San José, PyCons, CEBIT Developer World, BI Forum, IT-Tage FFM, PyData London, Berlin, PyParis, …
    NLP Alchemy Toolset. 1 - What is spaCy? Stable open-source library for NLP: supports 55+ languages, comes with many pretrained language models, designed for production usage. Advantages: fast and efficient, relatively intuitive, simple deep learning implementation. Out-of-the-box support for: named entity recognition, part-of-speech (POS) tagging, labelled dependency parsing, …
    2 - Building Blocks of spaCy. 2 - Building Blocks I. Tokenization: segmenting text into words and punctuation marks. POS (part-of-speech) tagging: cat -> noun, scratched -> verb. Lemmatization: cats -> cat, scratched -> scratch. Sentence boundary detection: "Hello, Mrs. Poppins!" NER (named entity recognition): Mary Poppins -> person, Apple -> company. Serialization: saving NLP documents. 2 - Building Blocks II.
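The building blocks listed in the excerpt above map directly onto a few lines of spaCy. A minimal sketch, assuming the small English model has been installed with `python -m spacy download en_core_web_sm`:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Hello, Mrs. Poppins! Apple hired Mary Poppins in London.")

# Tokenization, POS tagging and lemmatization
for token in doc:
    print(token.text, token.pos_, token.lemma_)

# Sentence boundary detection
print([sent.text for sent in doc.sents])

# Named entity recognition
print([(ent.text, ent.label_) for ent in doc.ents])

# Serialization: the processed document can be stored and reloaded later
data = doc.to_bytes()
```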
  • Classifying Relevant Social Media Posts During Disasters Using Ensemble of Domain-Agnostic and Domain-Specific Word Embeddings
    AAAI 2019 Fall Symposium Series: AI for Social Good. Classifying Relevant Social Media Posts During Disasters Using Ensemble of Domain-agnostic and Domain-specific Word Embeddings. Ganesh Nalluru, Rahul Pandey, Hemant Purohit. Volgenau School of Engineering, George Mason University, Fairfax, VA, 22030.
    Abstract: The use of social media as a means of communication has significantly increased over recent years. There is a plethora of information flow over the different topics of discussion, which is widespread across different domains. The ease of information sharing has increased noisy data being induced along with the relevant data stream. Finding such relevant data is important, especially when we are dealing with a time-critical domain like disasters. It is also more important to filter the relevant data in a real-time setting to timely process and leverage the information for decision support. However, the short text and sometimes ungrammatical nature of social media data challenge the extraction of contextual information cues, which could help differentiate relevant vs.
    … to provide relevant intelligence inputs to the decision-makers (Hughes and Palen 2012). However, due to the burstiness of the incoming stream of social media data during the time of emergencies, it is really hard to filter relevant information given the limited number of emergency service personnel (Castillo 2016). Therefore, there is a need to automatically filter out relevant posts from the pile of noisy data coming from the unconventional information channel of social media in a real-time setting. Our contribution: We provide a generalizable classification framework to classify relevant social media posts for emergency services. We introduce the framework in the Method section, including the description of domain-agnostic and
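The excerpt describes classifying posts with an ensemble of domain-agnostic and domain-specific word embeddings. One simple way to combine two embedding spaces into classifier features is to concatenate them per token and average over the post; the sketch below illustrates only that idea, with placeholder lookup tables rather than the authors' actual embeddings or model.

```python
import numpy as np

# Placeholder lookup tables; in practice these would be pre-trained
# domain-agnostic vectors (e.g. GloVe) and domain-specific vectors
# (e.g. trained on crisis tweets).
generic_vecs = {"flood": np.random.rand(100), "help": np.random.rand(100)}
crisis_vecs = {"flood": np.random.rand(50), "help": np.random.rand(50)}

def post_features(tokens, dim_generic=100, dim_crisis=50):
    """Represent a post by averaging the concatenated embeddings of its tokens."""
    rows = []
    for tok in tokens:
        g = generic_vecs.get(tok, np.zeros(dim_generic))
        c = crisis_vecs.get(tok, np.zeros(dim_crisis))
        rows.append(np.concatenate([g, c]))
    return np.mean(rows, axis=0) if rows else np.zeros(dim_generic + dim_crisis)

x = post_features("urgent help needed flood in downtown".split())
print(x.shape)  # (150,) feature vector for a downstream relevance classifier
```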
  • Arxiv:2002.06235V1
    Semantic Relatedness and Taxonomic Word Embeddings. Magdalena Kacmajor (Innovation Exchange, IBM Ireland), John D. Kelleher (ADAPT Research Centre, Technological University Dublin), Filip Klubička (ADAPT Research Centre, Technological University Dublin), Alfredo Maldonado (Underwriters Laboratories).
    Abstract: This paper connects a series of papers dealing with taxonomic word embeddings. It begins by noting that there are different types of semantic relatedness and that different lexical representations encode different forms of relatedness. A particularly important distinction within semantic relatedness is that of thematic versus taxonomic relatedness. Next, we present a number of experiments that analyse taxonomic embeddings that have been trained on a synthetic corpus that has been generated via a random walk over a taxonomy. These experiments demonstrate how the properties of the synthetic corpus, such as the percentage of rare words, are affected by the shape of the knowledge graph the corpus is generated from. Finally, we explore the interactions between the relative sizes of natural and synthetic corpora on the performance of embeddings when taxonomic and thematic embeddings are combined.
    1 Introduction. Deep learning has revolutionised natural language processing over the last decade. A key enabler of deep learning for natural language processing has been the development of word embeddings. One reason for this is that deep learning intrinsically involves the use of neural network models and these models only work with numeric inputs. Consequently, applying deep learning to natural language processing first requires developing a numeric representation for words. Word embeddings provide a way of creating numeric representations that have proven to have a number of advantages over traditional numeric representations of language.
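The synthetic corpus mentioned in the excerpt is generated by random walks over a taxonomy. A minimal sketch of that corpus-generation step, using a toy is-a graph; the graph, walk length and sampling policy are illustrative assumptions, not the paper's exact procedure.

```python
import random

# Toy taxonomy as an undirected is-a graph (child <-> parent links).
taxonomy = {
    "animal": ["mammal", "bird"],
    "mammal": ["animal", "dog", "cat"],
    "bird": ["animal", "sparrow"],
    "dog": ["mammal"],
    "cat": ["mammal"],
    "sparrow": ["bird"],
}

def random_walk_corpus(graph, n_sentences=1000, walk_len=8, seed=0):
    """Generate pseudo-sentences by walking the graph; each walk is one 'sentence'."""
    rng = random.Random(seed)
    nodes = list(graph)
    corpus = []
    for _ in range(n_sentences):
        node = rng.choice(nodes)
        walk = [node]
        for _ in range(walk_len - 1):
            node = rng.choice(graph[node])
            walk.append(node)
        corpus.append(" ".join(walk))
    return corpus

synthetic_corpus = random_walk_corpus(taxonomy)
print(synthetic_corpus[:3])
# The pseudo-sentences can then be fed to any word-embedding trainer
# (e.g. word2vec) to obtain taxonomic embeddings.
```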
  • Injection of Automatically Selected Dbpedia Subjects in Electronic
    Injection of Automatically Selected DBpedia Subjects in Electronic Medical Records to boost Hospitalization Prediction. Raphaël Gazzotti, Catherine Faron Zucker, Fabien Gandon, Virginie Lacroix-Hugues, David Darmon.
    To cite this version: Raphaël Gazzotti, Catherine Faron Zucker, Fabien Gandon, Virginie Lacroix-Hugues, David Darmon. Injection of Automatically Selected DBpedia Subjects in Electronic Medical Records to boost Hospitalization Prediction. SAC 2020 - 35th ACM/SIGAPP Symposium On Applied Computing, Mar 2020, Brno, Czech Republic. 10.1145/3341105.3373932. HAL Id: hal-02389918, https://hal.archives-ouvertes.fr/hal-02389918, submitted on 16 Dec 2019.
    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    Raphaël Gazzotti (Université Côte d'Azur, Inria, CNRS, I3S, Sophia-Antipolis, France), Catherine Faron-Zucker (Université Côte d'Azur, Inria, CNRS, I3S, Sophia-Antipolis, France), Fabien Gandon (Inria, Université Côte d'Azur, CNRS,
  • Combining Word Embeddings with Semantic Resources
    AutoExtend: Combining Word Embeddings with Semantic Resources. Sascha Rothe (LMU Munich), Hinrich Schütze (LMU Munich).
    We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.
    1. Introduction. Unsupervised methods for learning word embeddings are widely used in natural language processing (NLP). The only data these methods need as input are very large corpora. However, in addition to corpora, there are many other resources that are undoubtedly useful in NLP, including lexical resources like WordNet and Wiktionary and knowledge bases like Wikipedia and Freebase. We will simply refer to these as resources. In this article, we present AutoExtend, a method for enriching these valuable resources with embeddings for non-word objects they describe; for example, AutoExtend enriches WordNet with embeddings for synsets. The word embeddings and the new non-word embeddings live in the same vector space. Many NLP applications benefit if non-word objects described by resources—such as synsets in WordNet—are also available as embeddings.
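AutoExtend itself learns synset embeddings through an encoding/decoding formulation, which the excerpt does not spell out. The sketch below shows only the simpler baseline idea of placing a synset vector in the same space as word vectors by averaging its member words; it is not the AutoExtend method, and the vectors and synset entry are toy placeholders.

```python
import numpy as np

# Placeholder word embeddings (e.g. from word2vec); toy 5-dimensional vectors.
word_vecs = {
    "dog": np.array([0.20, 0.10, 0.90, 0.30, 0.00]),
    "domestic_dog": np.array([0.25, 0.05, 0.85, 0.35, 0.10]),
    "canis_familiaris": np.array([0.30, 0.00, 0.80, 0.30, 0.05]),
}

# A synset described by its member lemmas (toy WordNet-style entry).
synset_members = {"dog.n.01": ["dog", "domestic_dog", "canis_familiaris"]}

def naive_synset_vector(synset_id):
    """Place a synset in the word-vector space by averaging member word vectors."""
    vecs = [word_vecs[w] for w in synset_members[synset_id] if w in word_vecs]
    return np.mean(vecs, axis=0)

print(naive_synset_vector("dog.n.01"))
```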
  • Word-Like Character N-Gram Embedding
    Word-like character n-gram embedding. Geewook Kim, Kazuki Fukui and Hidetoshi Shimodaira. Department of Systems Science, Graduate School of Informatics, Kyoto University; Mathematical Statistics Team, RIKEN Center for Advanced Intelligence Project.
    Abstract: We propose a new word embedding method called word-like character n-gram embedding, which learns distributed representations of words by embedding word-like character n-grams. Our method is an extension of recently proposed segmentation-free word embedding, which directly embeds frequent character n-grams from a raw corpus. However, its n-gram vocabulary tends to contain too many non-word n-grams. We solved this problem by introducing an idea of expected word frequency. Compared to the previously proposed methods, our method can embed more words, along with the words that are not included in a given basic word dictionary. Since our method does not rely on word segmentation with rich word dictionaries, it is especially effective when the
    Table 1: Top-10 2-grams in Sina Weibo and 4-grams in Japanese Twitter (Experiment 1). Words are indicated by boldface and space characters are marked by ␣. Columns: FNE (Chinese | Japanese), WNE (Proposed) (Chinese | Japanese). 1: ][ | wwww | 自己 | フォロー; 2: 。␣ | !!!! | 。␣ | ありがと; 3: !␣ | ありがと | ][ | wwww; 4: .. | りがとう | 一个 | !!!!; 5: ]␣ | ございま | 微博 | めっちゃ; 6: 。。 | うござい | 什么 | んだけど; 7: ,我 | とうござ | 可以 | うござい; 8: !! | ざいます | 没有 | line; 9: ␣我 | がとうご | 吗? | 2018; 10: 了, | ください | 哈哈 | じゃない.
    … tion tools are used to determine word boundaries in the raw corpus. However, these segmenters require rich dictionaries for accurate segmentation, which are expensive to prepare and not always available.
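The segmentation-free starting point mentioned in the excerpt embeds frequent character n-grams taken directly from raw, unsegmented text. The sketch below shows only that frequent-n-gram counting step (the paper's expected-word-frequency weighting is not reproduced); the two corpus lines are toy examples.

```python
from collections import Counter

def char_ngrams(text, n_min=2, n_max=4):
    """Enumerate all character n-grams of lengths n_min..n_max from raw text."""
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            yield text[i:i + n]

corpus = ["ありがとうございます", "フォローありがとうございます"]  # toy unsegmented lines

counts = Counter()
for line in corpus:
    counts.update(char_ngrams(line))

# Keep only frequent n-grams as the candidate embedding vocabulary.
vocab = [gram for gram, freq in counts.most_common() if freq >= 2]
print(vocab[:10])
```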
  • Information Extraction Based on Named Entity for Tourism Corpus
    Information Extraction based on Named Entity for Tourism Corpus. Chantana Chantrapornchai, Aphisit Tunsakul. Dept. of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand.
    Abstract: Tourism information is scattered around nowadays. To search for the information, it is usually time consuming to browse through the results from search engine, select and view the details of each accommodation. In this paper, we present a methodology to extract particular information from full text returned from the search engine to facilitate the users. Then, the users can specifically look to the desired relevant information. The approach can be used for the same task in other domains. The main steps are 1) building training data and 2) building recognition model. First, the tourism data is gathered and the vocabularies are built. The raw corpus is used to train for creating vocabulary embedding. Also, it is used for creating annotated data.
    … The ontology is extracted based on HTML web structure, and the corpus is based on WordNet. For these approaches, the time consuming process is the annotation which is to annotate the type of name entity. In this paper, we target at the tourism domain, and aim to extract particular information helping for ontology data acquisition. We present the framework for the given named entity extraction. Starting from the web information scraping process, the data are selected based on the HTML tag for corpus building. The data is used for model creation for automatic
  • A Second Look at Word Embeddings W4705: Natural Language Processing
    A second look at word embeddings. W4705: Natural Language Processing. Fei-Tzin Lee, October 23, 2019.
    Overview. Last time: distributional representations (SVD), word2vec, analogy performance. This time: homework-related topics, non-homework topics. Outline: 1. GloVe; 2. How good are our embeddings?; 3. The math behind the models?; 4. Word embeddings, new and improved.
    GloVe - A recap of GloVe. Our motivation: SVD places too much emphasis on unimportant matrix entries; word2vec never gets to look at global co-occurrence statistics. Can we create a new model that balances the strengths of both of these to better express linear structure?
    Setting: We'll use more or less the same setting as in previous models: a corpus D; a word vocabulary V from which both target and context words are drawn; a co-occurrence matrix M_ij, from which we can calculate conditional probabilities P_ij = M_ij / M_i.
    The GloVe objective, in overview. Idea: we want to capture not individual probabilities of co-occurrence, but ratios of co-occurrence probability between pairs (w_i, w_k) and (w_j, w_k).
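The excerpt stops right after motivating co-occurrence ratios. For reference, the standard GloVe weighted least-squares objective (Pennington et al., 2014), written with the excerpt's co-occurrence matrix M, is:

```latex
J = \sum_{i,j=1}^{|V|} f(M_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log M_{ij} \right)^{2},
\qquad
f(x) =
\begin{cases}
  (x / x_{\max})^{\alpha} & \text{if } x < x_{\max}, \\
  1 & \text{otherwise},
\end{cases}
```

where w_i and \tilde{w}_j are target and context word vectors, b_i and \tilde{b}_j are biases, and f damps the influence of very rare and very frequent co-occurrences.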
  • Learned in Speech Recognition: Contextual Acoustic Word Embeddings
    LEARNED IN SPEECH RECOGNITION: CONTEXTUAL ACOUSTIC WORD EMBEDDINGS. Shruti Palaskar, Vikas Raunak and Florian Metze. Carnegie Mellon University, Pittsburgh, PA, U.S.A.
    ABSTRACT: End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon. In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units. In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution. On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the
    … model [10, 11, 12] trained for direct Acoustic-to-Word (A2W) speech recognition [13]. Using this model, we jointly learn to automatically segment and classify input speech into individual words, hence getting rid of the problem of chunking or requiring pre-defined word boundaries. As our A2W model is trained at the utterance level, we show that we can not only learn acoustic word embeddings, but also learn them in the proper context of their containing sentence. We also evaluate our contextual acoustic word embeddings on a spoken language understanding task, demonstrating that they can be useful in non-transcription downstream tasks. Our main contributions in this paper are the following: 1. We demonstrate the usability of attention not only for aligning words to acoustic frames without any forced alignment but also for constructing Contextual Acoustic Word Embeddings (CAWE).
  • Cw2vec: Learning Chinese Word Embeddings with Stroke N-Gram Information
    The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information. Shaosheng Cao (1,2), Wei Lu (2), Jun Zhou (1), Xiaolong Li (1). 1 AI Department, Ant Financial Services Group; 2 Singapore University of Technology and Design. {shaosheng.css, jun.zhoujun, xl.li}@antfin.com
    Abstract: We propose cw2vec, a novel method for learning Chinese word embeddings. It is based on our observation that exploiting stroke-level information is crucial for improving the learning of Chinese word embeddings. Specifically, we design a minimalist approach to exploit such features, by using stroke n-grams, which capture semantic and morphological level information of Chinese words. Through qualitative analysis, we demonstrate that our model is able to extract semantic information that cannot be captured by existing methods. Empirical results on the word similarity, word analogy, text classification and named entity recognition tasks show that the proposed approach consistently outperforms state-of-the-art approaches such as word-based word2vec and GloVe, character-based CWE, component-based JWE and pixel-based GWE.
    [Figure 1: Radical v.s. components v.s. stroke n-gram]
    1. Introduction. Word representation learning has recently received a significant amount of attention in the field of natural language processing (NLP).
    … Bojanowski et al. 2016; Cao and Lu 2017). While these approaches were shown effective, they largely focused on European languages such as English, Spanish and German that employ the Latin script in their writing system. Therefore the methods developed are not directly applicable to languages such as Chinese that employ a completely different writing system. In Chinese, each word typically consists of less characters than English, where each character conveys fruitful se-
  • Effects of Pre-Trained Word Embeddings on Text-Based Deception Detection
    2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress. Effects of Pre-trained Word Embeddings on Text-based Deception Detection. David Nam, Jerin Yasmin, Farhana Zulkernine. School of Computing, Queen's University, Kingston, Canada. Email: {david.nam, jerin.yasmin, farhana.zulkernine}@queensu.ca
    Abstract: With e-commerce transforming the way in which individuals and businesses conduct trades, online reviews have become a great source of information among consumers. With 93% of shoppers relying on online reviews to make their purchasing decisions, the credibility of reviews should be strongly considered. While detecting deceptive text has proven to be a challenge for humans to detect, it has been shown that machines can be better at distinguishing between truthful and deceptive online information by applying pattern analysis on a large amount of data. In this work, we look at the use of several popular pre-trained word embeddings (Word2Vec, GloVe, fastText) with deep neural network models (CNN,
    … be wary of any deception that the reviews might present. Deception can be defined as a dishonest or illegal method in which misleading information is intentionally conveyed to others for a specific gain [2]. Companies or certain individuals may attempt to deceive consumers for various reasons to influence their decisions about purchasing a product or a service. While the motives for deception may be hard to determine, it is clear why businesses may have a vested interest in producing deceptive reviews about their products.
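The excerpt compares pre-trained Word2Vec, GloVe and fastText vectors as inputs to neural models. A common preparation step is reading the pre-trained vectors and assembling an embedding matrix for a fixed vocabulary; a minimal sketch follows, in which the file name, dimensions and vocabulary are placeholders.

```python
import numpy as np

def load_vectors(path, dim):
    """Read GloVe-style text vectors: one token followed by `dim` floats per line."""
    vectors = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return vectors

pretrained = load_vectors("glove.6B.100d.txt", dim=100)  # placeholder path

vocab = ["the", "review", "excellent", "refund", "<unk>"]
embedding_matrix = np.zeros((len(vocab), 100), dtype="float32")
for idx, word in enumerate(vocab):
    if word in pretrained:
        embedding_matrix[idx] = pretrained[word]  # copy the pre-trained vector
    else:
        embedding_matrix[idx] = np.random.normal(scale=0.1, size=100)  # OOV init

# embedding_matrix can now initialise the embedding layer of a CNN or LSTM classifier.
print(embedding_matrix.shape)
```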
  • Knowledge-Powered Deep Learning for Word Embedding
    Knowledge-Powered Deep Learning for Word Embedding. Jiang Bian, Bin Gao, and Tie-Yan Liu. Microsoft Research. {jibian,bingao,tyliu}@microsoft.com
    Abstract. The basis of applying deep learning to solve natural language processing tasks is to obtain high-quality distributed representations of words, i.e., word embeddings, from large amounts of text data. However, text itself usually contains incomplete and ambiguous information, which makes necessity to leverage extra knowledge to understand it. Fortunately, text itself already contains well-defined morphological and syntactic knowledge; moreover, the large amount of texts on the Web enable the extraction of plenty of semantic knowledge. Therefore, it makes sense to design novel deep learning algorithms and systems in order to leverage the above knowledge to compute more effective word embeddings. In this paper, we conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings. Our study explores these types of knowledge to define new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning, respectively. Experiments on an analogical reasoning task, a word similarity task, and a word completion task have all demonstrated that knowledge-powered deep learning can enhance the effectiveness of word embedding.
    1 Introduction. With rapid development of deep learning techniques in recent years, it has drawn increasing attention to train complex and deep models on large amounts of data, in order to solve a wide range of text mining and natural language processing (NLP) tasks [4, 1, 8, 13, 19, 20].