Scispacy: Fast and Robust Models for Biomedical Natural Language Processing


Mark Neumann, Daniel King, Iz Beltagy, Waleed Ammar
Allen Institute for Artificial Intelligence, Seattle, WA, USA
{markn,daniel,beltagy,waleeda}@allenai.org

Proceedings of the BioNLP 2019 workshop, pages 319-327, Florence, Italy, August 1, 2019. (c) 2019 Association for Computational Linguistics.

Abstract

Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift. Processing biomedical and clinical text is a critically important application area of natural language processing, for which there are few robust, practical, publicly available models. This paper describes scispaCy, a new Python library and models for practical biomedical/scientific text processing, which heavily leverages the spaCy library. We detail the performance of two packages of models released in scispaCy and demonstrate their robustness on several tasks and datasets. Models and code are available at https://allenai.github.io/scispacy/.

[Figure 1: Growth of the annual number of cited references from 1650 to 2012 in the medical and health sciences (citing publications from 1980 to 2012). Figure from (Bornmann and Mutz, 2014).]

1 Introduction

The publication rate in the medical and biomedical sciences is growing at an exponential rate (Bornmann and Mutz, 2014). The information overload problem is widespread across academia, but is particularly apparent in the biomedical sciences, where individual papers may contain specific discoveries relating to a dizzying variety of genes, drugs, and proteins. In order to cope with the sheer volume of new scientific knowledge, there have been many attempts to automate the process of extracting entities, relations, protein interactions and other structured knowledge from scientific papers (Wei et al., 2016; Ammar et al., 2018; Poon et al., 2014).

Although there exists a wealth of tools for processing biomedical text, many focus primarily on named entity recognition and disambiguation. MetaMap and MetaMapLite (Aronson, 2001; Demner-Fushman et al., 2017), the two most widely used and supported tools for biomedical text processing, support entity linking with negation detection and acronym resolution. However, tools which cover more classical natural language processing (NLP) tasks, such as the GENIA tagger (Tsuruoka et al., 2005; Tsuruoka and Tsujii, 2005), or phrase structure parsers such as those presented in McClosky and Charniak (2008), typically do not make use of new research innovations such as word representations or neural networks.

In this paper, we introduce scispaCy, a specialized NLP library for processing biomedical texts which builds on the robust spaCy library (spacy.io), and document its performance relative to state-of-the-art models for part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and sentence segmentation. Specifically, we:

• Release a reformatted version of the GENIA 1.0 (Kim et al., 2003) corpus, converted into Universal Dependencies v1.0 and aligned with the original text from the PubMed abstracts.
• Benchmark 9 named entity recognition models for more specific entity extraction applications, demonstrating competitive performance when compared to strong baselines.

• Release and evaluate two fast and convenient pipelines for biomedical text, which include tokenization, part-of-speech tagging, dependency parsing and named entity recognition.

2 Overview of (sci)spaCy

In this section, we briefly describe the models used in the spaCy library and describe how we build on them in scispaCy.

spaCy. The Python-based spaCy library (Honnibal and Montani, 2017; source code at https://github.com/explosion/spaCy) provides a variety of practical tools for text processing in multiple languages. Their models have emerged as the de facto standard for practical NLP due to their speed, robustness and close-to-state-of-the-art performance. As the spaCy models are popular and the spaCy API is widely known to many potential users, we choose to build upon the spaCy library for creating a biomedical text processing pipeline.

scispaCy. Our goal is to develop scispaCy as a robust, efficient and performant NLP library to satisfy the primary text processing needs in the biomedical domain. In this release of scispaCy, we retrain spaCy models (based on spaCy version 2.0.18) for POS tagging, dependency parsing, and NER using datasets relevant to biomedical text, and enhance the tokenization module with additional rules. scispaCy contains two core released packages: en_core_sci_sm and en_core_sci_md. Models in the en_core_sci_md package have a larger vocabulary and include word vectors, while those in en_core_sci_sm have a smaller vocabulary and do not include word vectors, as shown in Table 1.

Table 1: Vocabulary statistics for the two core packages in scispaCy.

Model            Vocab Size   Vector Count   Min Word Freq   Min Doc Freq
en_core_sci_sm   58,338       0              50              5
en_core_sci_md   101,678      98,131         20              5

Processing Speed. To emphasize the efficiency and practical utility of the end-to-end pipeline provided by scispaCy packages, we perform a speed comparison with several other publicly available processing pipelines for biomedical text using 10k randomly selected PubMed abstracts. We report results with and without segmenting the abstracts into sentences, since some of the libraries (e.g., the GENIA tagger) are designed to operate on sentences.

Table 2: Wall clock comparison of different publicly available biomedical NLP pipelines. All experiments run on a single machine with 12 Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz and 62GB RAM. For the Biaffine Parser, a pre-compiled Tensorflow binary with support for AVX2 instructions was used in a good faith attempt to optimize the implementation. Dynet does support the Intel MKL, but requires compilation from scratch and as such does not represent an "off the shelf" system. TF is short for Tensorflow.

Software Package          Abstract (ms)   Sentence (ms)
NLP4J (java)              19              2
Genia Tagger (c++)        73              3
Biaffine (TF)             272             29
Biaffine (TF + 12 CPUs)   72              7
jPTDP (Dynet)             905             97
Dexter v2.1.0             208             84
MetaMapLite v3.6.2        293             89
en_core_sci_sm            32              4
en_core_sci_md            33              4

As shown in Table 2, both models released in scispaCy demonstrate competitive speed relative to pipelines written in C++ and Java, languages designed for production settings. Whilst scispaCy is not as fast as pipelines designed for purely production use-cases (e.g., NLP4J), it has the benefit of straightforward integration with the large ecosystem of Python libraries for machine learning and text processing. Although the comparison in Table 2 is not an apples-to-apples comparison with other frameworks (different tasks, implementation languages, etc.), it is useful for understanding scispaCy's runtime in the context of other pipeline components. Running scispaCy models in addition to standard entity linking software such as MetaMap would result in only a marginal increase in overall runtime.
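To make the pipeline described above concrete, here is a minimal usage sketch. It assumes the en_core_sci_sm model package has already been installed as a Python package (e.g., via pip from the scispaCy release page); the example sentence is our own.

```python
import spacy

# Load the small scispaCy package (tokenizer, POS tagger, parser, NER).
# Assumes en_core_sci_sm has been installed beforehand.
nlp = spacy.load("en_core_sci_sm")

text = ("Myeloid derived suppressor cells (MDSC) are immature myeloid "
        "cells with immunosuppressive activity.")
doc = nlp(text)

# A single pass yields sentence boundaries, POS tags, and entity mentions.
for sent in doc.sents:
    print([(tok.text, tok.pos_) for tok in sent])
print([(ent.text, ent.label_) for ent in doc.ents])
```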
In the following section, we describe the POS taggers and dependency parsers in scispaCy.

3 POS Tagging and Dependency Parsing

The joint POS tagging and dependency parsing model in spaCy is an arc-eager transition-based parser trained with a dynamic oracle, similar to Goldberg and Nivre (2012). Features are CNN representations of token features and are shared across all pipeline models (Kiperwasser and Goldberg, 2016; Zhang and Weiss, 2016). Next, we describe the data we used to train it in scispaCy.

3.1 Datasets

GENIA 1.0 Dependencies. To train the dependency parser and part-of-speech tagger in both released models, we convert the treebank of McClosky and Charniak (2008), which is based on the GENIA 1.0 corpus (Kim et al., 2003), to Universal Dependencies v1.0 using the Stanford Dependency Converter (Schuster and Manning, 2016). As this dataset has POS tags annotated, we use it to train the POS tagger jointly with the dependency parser in both released models.

As we believe the Universal Dependencies converted from the original GENIA 1.0 corpus are generally useful, we have released them as a separate contribution of this paper. In this data release, we also align the converted dependency parses to their original text spans in the raw, untokenized abstracts from the original release, and include the PubMed metadata for the abstracts, which was discarded in the GENIA corpus released by McClosky and Charniak (2008). We hope that this raw format can emerge as a resource for practical evaluation in the biomedical domain of core NLP tasks such as tokenization, sentence segmentation and joint models of syntax.

OntoNotes 5.0. To increase the robustness of the dependency parser and POS tagger to generic text, we make use of the OntoNotes 5.0 corpus when …

Table 3: Part-of-speech tagging results on the GENIA test set.

Package/Model          GENIA
MarMoT                 98.61
jPTDP-v1               98.66
NLP4J-POS              98.80
BiLSTM-CRF             98.44
BiLSTM-CRF-charcnn     98.89
BiLSTM-CRF-charlstm    98.85
en_core_sci_sm         98.38
en_core_sci_md         98.51

Table 4: Dependency parsing results on the GENIA 1.0 corpus converted to dependencies using the Stanford Universal Dependency Converter. We additionally provide evaluations using Stanford Dependencies (SD) for comparison with the results reported in (Nguyen and Verspoor, 2018).

Package/Model                      UAS     LAS
Stanford-NNdep                     89.02   87.56
NLP4J-dep                          90.25   88.87
jPTDP-v1                           91.89   90.27
Stanford-Biaffine-v2               92.64   91.23
Stanford-Biaffine-v2 (Gold POS)    92.84   91.92
en_core_sci_sm-SD                  90.31   88.65
en_core_sci_md-SD                  90.66   88.98
en_core_sci_sm                     89.69   87.67
en_core_sci_md                     90.60   88.79
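As an illustration of the parser output evaluated in Tables 3 and 4, the following sketch inspects the POS tags and dependency arcs a loaded scispaCy pipeline assigns. It reuses the nlp object from the earlier sketch; the sentence is an invented biomedical example.

```python
# Inspect the joint POS tagging and dependency parsing output.
doc = nlp("Interferon gamma upregulates MHC class II expression in macrophages.")

for token in doc:
    # Each token exposes its POS tag, dependency label, and syntactic head.
    print(f"{token.text:<12} {token.pos_:<6} {token.dep_:<10} head={token.head.text}")
```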
Recommended publications
  • Predicting the Correct Entities in Named-Entity Linking
    Separating the Signal from the Noise: Predicting the Correct Entities in Named-Entity Linking. Drew Perkins, Uppsala University, Department of Linguistics and Philology. Master's Thesis in Language Technology (Master Programme in Language Technology), 30 ECTS credits, June 9, 2020. Supervisors: Gongbo Tang, Uppsala University; Thorsten Jacobs, Seavus.

    Abstract: In this study, I constructed a named-entity linking system that maps between contextual word embeddings and knowledge graph embeddings to predict correct entities. To establish a named-entity linking system, I first applied named-entity recognition to identify the entities of interest. I then performed candidate generation via locality-sensitive hashing (LSH), where a candidate group of potential entities was created for each identified entity. Afterwards, my named-entity disambiguation component was performed to select the most probable candidate. By concatenating contextual word embeddings and knowledge graph embeddings in my disambiguation component, I present a novel approach to named-entity linking. I conducted the experiments with the Kensho-Derived Wikimedia Dataset and the AIDA CoNLL-YAGO Dataset; the former dataset was used for deployment and the latter is a benchmark dataset for entity linking tasks. Three deep learning models were evaluated on the named-entity disambiguation component with different context embeddings. The evaluation was treated as a classification task, where I trained my models to select the correct entity from a list of candidates. By optimizing the named-entity linking through this methodology, the entire system can be used in recommendation engines, with a high F1 of 86% on the former dataset. With the benchmark dataset, the proposed method is able to achieve an F1 of 79%.
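A sketch of the disambiguation step this abstract describes, scoring each candidate with a linear layer over the concatenation of the mention's contextual embedding and the candidate's knowledge-graph embedding, might look as follows. The embeddings, dimensions, entity names, and weights are invented placeholders, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_candidates(context_emb: np.ndarray,
                     candidate_embs: dict[str, np.ndarray],
                     w: np.ndarray) -> str:
    """Rank candidate entities for one mention.

    Each candidate is scored by a linear layer over the concatenation of
    the mention's contextual embedding and the candidate's KG embedding
    (a stand-in for the trained classifier described in the thesis).
    """
    best_entity, best_score = "", -np.inf
    for entity, kg_emb in candidate_embs.items():
        features = np.concatenate([context_emb, kg_emb])
        score = float(w @ features)
        if score > best_score:
            best_entity, best_score = entity, score
    return best_entity

# Toy data: a 768-d contextual embedding and 100-d KG embeddings.
context = rng.normal(size=768)
candidates = {"Q937_Albert_Einstein": rng.normal(size=100),
              "Q88665_Einstein_(crater)": rng.normal(size=100)}
w = rng.normal(size=768 + 100)  # untrained weights, for illustration only
print(score_candidates(context, candidates, w))
```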
  • Three Birds (In the LLOD Cloud) with One Stone: Babelnet, Babelfy and the Wikipedia Bitaxonomy
    Three Birds (in the LLOD Cloud) with One Stone: BabelNet, Babelfy and the Wikipedia Bitaxonomy. Tiziano Flati and Roberto Navigli, Dipartimento di Informatica, Sapienza Università di Roma.

    Abstract: In this paper we present the current status of linguistic resources published as linked data and linguistic services in the LLOD cloud in our research group, namely BabelNet, Babelfy and the Wikipedia Bitaxonomy. We describe them in terms of their salient aspects and objectives and discuss the benefits that each of these potentially brings to the world of LLOD NLP-aware services. We also present public Web-based services which enable querying, exploring and exporting data into RDF format.

    1 Introduction: Recent years have witnessed an upsurge in the amount of semantic information published on the Web. Indeed, the Web of Data has been increasing steadily in both volume and variety, transforming the Web into a global database in which resources are linked across sites. It is becoming increasingly critical that existing lexical resources be published as Linked Open Data (LOD), so as to foster integration, interoperability and reuse on the Semantic Web [5]. Thus, lexical resources provided in RDF format can contribute to the creation of the so-called Linguistic Linked Open Data (LLOD), a vision fostered by the Open Linguistic Working Group (OWLG), in which part of the Linked Open Data cloud is made up of interlinked linguistic resources [2]. The multilinguality aspect is key to this vision, in that it enables Natural Language Processing tasks which are not only cross-lingual, but also independent both of the language of the user input and of the linked data exploited to perform the task.
  • Cross Lingual Entity Linking with Bilingual Topic Model
    Cross Lingual Entity Linking with Bilingual Topic Model. Tao Zhang, Kang Liu and Jun Zhao, Institute of Automation, Chinese Academy of Sciences, HaiDian District, Beijing, China. {tzhang, kliu, jzhao}@nlpr.ia.ac.cn. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence.

    Abstract: Cross lingual entity linking means linking an entity mention in a background source document in one language with the corresponding real world entity in a knowledge base written in the other language. The key problem is to measure the similarity score between the context of the entity mention and the document of the candidate entity. This paper presents a general framework for doing cross lingual entity linking by leveraging a large scale bilingual knowledge base, Wikipedia. We introduce a bilingual topic model that mines bilingual topics from this knowledge base, with the assumption that the same Wikipedia concept documents of two different languages share the same semantic topic distribution.

    … billion web pages on the Internet, and 31% of them are written in other languages. This report tells us that the percentage of web pages written in English is decreasing, which indicates that English pages are becoming less dominant compared with before. Therefore, it becomes more and more important to maintain the growing knowledge base by discovering and extracting information from all web contents written in different languages. Also, inserting new extracted knowledge derived from the information extraction system into a knowledge base inevitably needs a system to map the entity mentions in one language to their entries in a knowledge base which may be in another language. This task is known as cross-lingual entity linking. Intuitively, to resolve the cross-lingual entity linking problem, the key is how to define the similarity score between the context of the entity mention and the document of …
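The similarity computation this abstract centers on, comparing the topic distribution of a mention's context with that of a candidate entity's document over a set of shared cross-lingual topics, can be sketched as a cosine similarity between two topic vectors. The distributions below are invented; a real system would infer them with the bilingual topic model.

```python
import numpy as np

def topic_similarity(context_topics: np.ndarray, doc_topics: np.ndarray) -> float:
    """Cosine similarity between two topic distributions.

    Both vectors are distributions over the same K shared
    (language-independent) topics, as the bilingual topic model assumes.
    """
    num = float(context_topics @ doc_topics)
    denom = np.linalg.norm(context_topics) * np.linalg.norm(doc_topics)
    return num / denom

# Toy distributions over K = 5 shared topics.
mention_context = np.array([0.60, 0.20, 0.10, 0.05, 0.05])
candidate_doc_en = np.array([0.55, 0.25, 0.10, 0.05, 0.05])
candidate_doc_zh = np.array([0.05, 0.10, 0.15, 0.30, 0.40])
print(topic_similarity(mention_context, candidate_doc_en))  # high: likely match
print(topic_similarity(mention_context, candidate_doc_zh))  # low: unlikely match
```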
  • Babelfying the Multilingual Web: State-Of-The-Art Disambiguation and Entity Linking of Web Pages in 50 Languages
    Babelfying the Multilingual Web: state-of-the-art Disambiguation and Entity Linking of Web pages in 50 languages. Roberto Navigli, http://lcl.uniroma1.it. Joint work with Andrea Moro and Alessandro Raganato; supported by the ERC Starting Grant MultiJEDI (Multilingual Joint Word Sense Disambiguation, No. 259234) and the LIDER EU Project (No. 610782). Reference: A. Moro, A. Raganato, R. Navigli. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2, 2014.

    Motivation: Domain-specific Web content is available in many languages. Information should be extracted and processed independently of the source/target language. This could be done automatically by means of high-performance multilingual text understanding.

    BabelNet (http://babelnet.org): a wide-coverage multilingual semantic network and encyclopedic dictionary in 50 languages, comprising concepts from WordNet, named entities and specialized concepts from Wikipedia, and concepts integrated from both resources.
  • Named Entity Extraction for Knowledge Graphs: a Literature Overview
    Named Entity Extraction for Knowledge Graphs: A Literature Overview. Tareq Al-Moslmi, Marc Gallofré Ocaña, Andreas L. Opdahl, and Csaba Veres, Department of Information Science and Media Studies, University of Bergen, 5007 Bergen, Norway. Corresponding author: Tareq Al-Moslmi. Received December 12, 2019, accepted January 7, 2020, date of publication February 14, 2020, date of current version February 25, 2020. Digital Object Identifier 10.1109/ACCESS.2020.2973928. This work was supported by the News Angler Project through the Norwegian Research Council under Project 275872.

    Abstract: An enormous amount of digital information is expressed as natural-language (NL) text that is not easily processable by computers. Knowledge Graphs (KG) offer a widely used format for representing information in computer-processable form. Natural Language Processing (NLP) is therefore needed for mining (or lifting) knowledge graphs from NL texts. A central part of the problem is to extract the named entities in the text. The paper presents an overview of recent advances in this area, covering: Named Entity Recognition (NER), Named Entity Disambiguation (NED), and Named Entity Linking (NEL). We comment that many approaches to NED and NEL are based on older approaches to NER and need to leverage the outputs of state-of-the-art NER systems. There is also a need for standard methods to evaluate and compare named-entity extraction approaches. We observe that NEL has recently moved from being stepwise and isolated into an integrated process along two dimensions: the first is that previously sequential steps are now being integrated into end-to-end processes, and the second is that entities that were previously analysed in isolation are now being lifted in each other's context.
  • Entity Linking
    Entity Linking. Kjetil Møkkelgjerd. Master of Science in Computer Science, submission date: June 2017. Supervisor: Herindrasana Ramampiaro, IDI. Norwegian University of Science and Technology, Department of Computer Science.

    Abstract: Entity linking may be of help to quickly supplement a reader with further information about entities within a text by providing links to a knowledge base containing more information about each entity, and may thus potentially enhance the reading experience. In this thesis, we look at existing solutions, and implement our own deterministic entity linking system based on our research, using our own approach to the problem. Our system extracts all entities within the input text, and then disambiguates each entity considering the context. The extraction step is handled by an external framework, while the disambiguation step focuses on entity similarities, where similarity is defined by how similar the entities' categories are, which we measure by using data from a structured knowledge base called DBpedia. We base this approach on the assumption that similar entities usually occur close to each other in written text; thus we select entities that appear to be similar to other nearby entities. Experiments show that our implementation is not as effective as some of the existing systems we use for reference, and that our approach has some weaknesses which should be addressed. For example, DBpedia is not as consistent a knowledge base as we would like, and the entity extraction framework often fails to reproduce the same entity set as the dataset we use for evaluation. However, our solution shows promising results in many cases.
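The category-based similarity this abstract sketches can be illustrated with a Jaccard overlap between DBpedia category sets. The category names below are invented examples, and the thesis's actual measure may differ.

```python
def category_similarity(cats_a: set[str], cats_b: set[str]) -> float:
    """Jaccard overlap between two entities' DBpedia category sets."""
    if not cats_a or not cats_b:
        return 0.0
    return len(cats_a & cats_b) / len(cats_a | cats_b)

# Toy example: pick the candidate most similar to a nearby entity's categories.
nearby = {"Category:Countries_in_Europe", "Category:Scandinavia"}
candidates = {
    "Norway": {"Category:Countries_in_Europe", "Category:Scandinavia",
               "Category:Member_states_of_NATO"},
    "Norway,_Maine": {"Category:Towns_in_Maine", "Category:Oxford_County,_Maine"},
}
best = max(candidates, key=lambda e: category_similarity(nearby, candidates[e]))
print(best)  # Norway
```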
  • Named-Entity Recognition and Linking with a Wikipedia-Based Knowledge Base (Bachelor Thesis Presentation, Niklas Baumert)
    Named-Entity Recognition and Linking with a Wikipedia-based Knowledge Base. Bachelor thesis presentation, Niklas Baumert, 2018-11-08.

    Contents: 1 Why and what? (Motivation: what is named-entity recognition? what is entity linking?) 2 How? (Implementation: Wikipedia-backed knowledge base; entity recognizer and linker.) 3 Performance evaluation. 4 Conclusion.

    Motivation: The system was built to provide input files for QLever, a specialized use-case, with general application as an extra feature. It produces a wordsfile and a docsfile as output; the wordsfile is input for QLever. It supports Wikipedia page titles, Freebase IDs and Wikidata IDs, and can convert freely between those three afterwards.

    The docsfile is a list of all sentences with IDs:

    record id   text
    1           Anarchism is a political philosophy that advocates [...]
    2           These are often described as [...]

    The wordsfile contains each word of each sentence and determines entities and their likeliness:

    word           Entity?   recordID   score
    Anarchism      0         1          1
    <Anarchism>    1         1          0.9727
    is             0         1          1
    a              0         1          1
    political      0         1          1
    philosophy     0         1          1
    <Philosophy>   1         1          0.955

    Named-entity recognition (NER) identifies nouns in a given text: finding and categorizing entities in a given text [1][2]. (Proper) nouns refer to entities. Problem: words are ambiguous [1]. Same word, different entities within a category ("John F. Kennedy", the president or his son?); same word, different entities in different categories ("JFK", person or airport?).

    Entity linking (EL) links entities to a database, resolving ambiguity by associating mentions to entities in a knowledge base.
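Based on the wordsfile layout shown in the slides (word, entity flag, record ID, score), a small parsing sketch might look like this; the tab-separated layout is an assumption made for illustration.

```python
from dataclasses import dataclass

@dataclass
class WordsfileRow:
    word: str        # token text, or an <Entity> reference
    is_entity: bool  # True if the row names an entity, False for a plain word
    record_id: int   # ID of the sentence in the docsfile
    score: float     # likeliness of the entity assignment

def parse_wordsfile(lines):
    """Yield wordsfile rows; assumes tab-separated columns as in the slides."""
    for line in lines:
        word, is_entity, record_id, score = line.rstrip("\n").split("\t")
        yield WordsfileRow(word, is_entity == "1", int(record_id), float(score))

sample = ["Anarchism\t0\t1\t1", "<Anarchism>\t1\t1\t0.9727", "is\t0\t1\t1"]
for row in parse_wordsfile(sample):
    print(row)
```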
  • KORE 50^DYWC: An Evaluation Data Set for Entity Linking Based on DBpedia, YAGO, Wikidata, and Crunchbase
    KORE 50^DYWC: An Evaluation Data Set for Entity Linking Based on DBpedia, YAGO, Wikidata, and Crunchbase. Kristian Noullet, Rico Mix, Michael Färber, Karlsruhe Institute of Technology (KIT). Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 2389-2395, Marseille, 11-16 May 2020. (c) European Language Resources Association (ELRA), licensed under CC-BY-NC.

    Abstract: A major domain of research in natural language processing is named entity recognition and disambiguation (NERD). One of the main ways of attempting to achieve this goal is through use of Semantic Web technologies and its structured data formats. Due to the nature of structured data, information can be extracted more easily, therewith allowing for the creation of knowledge graphs. In order to properly evaluate a NERD system, gold standard data sets are required. A plethora of different evaluation data sets exists, mostly relying on either Wikipedia or DBpedia. Therefore, we have extended a widely-used gold standard data set, KORE 50, to not only accommodate NERD tasks for DBpedia, but also for YAGO, Wikidata and Crunchbase. As such, our data set, KORE 50^DYWC, allows for a broader spectrum of evaluation. Among others, the knowledge graph agnosticity of NERD systems may be evaluated which, to the best of our knowledge, was not possible until now for this number of knowledge graphs. Keywords: Data Sets, Entity Linking, Knowledge Graph, Text Annotation, NLP Interchange Format.

    1 Introduction: Automatic extraction of semantic information from texts is an open problem in natural language processing (NLP). … Therefore, annotation methods can only be evaluated on a small part of the KG concerning a specific domain.
  • Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions Wei Shen, Jianyong Wang, Senior Member, IEEE, and Jiawei Han, Fellow, IEEE
    Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions. Wei Shen, Jianyong Wang, Senior Member, IEEE, and Jiawei Han, Fellow, IEEE.

    Abstract: The large number of potential applications from bridging Web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions. Index Terms: Entity linking, entity disambiguation, knowledge base.

    1 Introduction, 1.1 Motivation: The amount of Web data has increased exponentially and the Web has become one of the largest data repositories in the world in recent years. Plenty of data on the Web is in the form of natural language. However, natural language is highly ambiguous, especially with respect to the frequent occurrences of named entities. A named entity may have multiple names and a name could denote several different named entities.

    Entity linking can facilitate many different tasks such as knowledge base population, question answering, and information integration. As the world evolves, new facts are generated and digitally expressed on the Web. Therefore, enriching existing knowledge bases using new facts becomes increasingly important. However, inserting newly extracted knowledge derived from the information extraction system into an existing knowledge base inevitably needs a system to map an entity mention associated with the extracted knowledge to the corresponding …
  • Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks
    Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks. Matthew Francis-Landau, Greg Durrett and Dan Klein, Computer Science Division, University of California, Berkeley. {mfl,gdurrett,klein}@cs.berkeley.edu

    Abstract: A key challenge in entity linking is making effective use of contextual information to disambiguate mentions that might refer to different entities in different contexts. We present a model that uses convolutional neural networks to capture semantic correspondence between a mention's context and a proposed target entity. These convolutional networks operate at multiple granularities to exploit various kinds of topic information, and their rich parameterization gives them the capacity to learn which n-grams characterize different topics. We combine these networks with a sparse linear model to achieve state-of-the-art performance on multiple entity linking datasets, outperforming the prior systems of Durrett and …

    … al., 2011), but such heuristics are hard to calibrate and they capture structure in a coarser way than learning-based methods. In this work, we model semantic similarity between a mention's source document context and its potential entity targets using convolutional neural networks (CNNs). CNNs have been shown to be effective for sentence classification tasks (Kalchbrenner et al., 2014; Kim, 2014; Iyyer et al., 2015) and for capturing similarity in models for entity linking (Sun et al., 2015) and other related tasks (Dong et al., 2015; Shen et al., 2014), so we expect them to be effective at isolating the relevant topic semantics for entity linking. We show that convolutions over multiple granularities of the input document are useful for providing different notions of semantic context.
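To make the CNN-based similarity concrete, here is a minimal numpy sketch: one convolutional filter bank with max-pooling encodes the mention context and an entity description, and their cosine similarity is the score. The dimensions, random filters, and toy vocabulary are all invented; the paper's actual model uses multiple granularities and combines the CNN scores with a sparse linear model.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, FILTERS, WIDTH = 50, 64, 3  # toy sizes

# Toy word vectors and untrained convolution filters.
vocab_emb = {w: rng.normal(size=EMB) for w in
             "the bank of river charges loan money water flows".split()}
conv_w = rng.normal(size=(FILTERS, WIDTH * EMB))

def encode(tokens: list[str]) -> np.ndarray:
    """Convolve width-3 windows over word vectors, then max-pool over time."""
    embs = np.stack([vocab_emb[t] for t in tokens])
    windows = [embs[i:i + WIDTH].reshape(-1) for i in range(len(embs) - WIDTH + 1)]
    activ = np.tanh(conv_w @ np.stack(windows).T)  # (FILTERS, n_windows)
    return activ.max(axis=1)                       # max-pool per filter

def similarity(context: list[str], entity_desc: list[str]) -> float:
    a, b = encode(context), encode(entity_desc)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("the bank charges money of the loan".split(),
                 "the bank of the river water flows".split()))
```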
  • SatelliteNER: An Effective Named Entity Recognition Model for the Satellite Domain
    SatelliteNER: An Effective Named Entity Recognition Model for the Satellite Domain. Omid Jafari¹, Parth Nagarkar¹, Bhagwan Thatte² and Carl Ingram³. ¹Computer Science Department, New Mexico State University, Las Cruces, NM, U.S.A.; ²Protos Software, Tempe, AZ, U.S.A.; ³Vigilant Technologies, Tempe, AZ, U.S.A. Keywords: Natural Language Processing, Named Entity Recognition, Spacy.

    Abstract: Named Entity Recognition (NER) is an important task that detects special types of entities in a given text. Existing NER techniques are optimized to find commonly used entities such as person or organization names. They are not specifically designed to find custom entities. In this paper, we present an end-to-end framework, called SatelliteNER, whose objective is to specifically find entities in the Satellite domain. The workflow of our proposed framework can be further generalized to different domains. The design of SatelliteNER includes effective modules for preprocessing, auto-labeling and collection of training data. We present a detailed analysis and show that the performance of SatelliteNER is superior to the state-of-the-art NER techniques for detecting entities in the Satellite domain.

    1 Introduction: Nowadays, large amounts of data are generated daily. Textual data is generated by news articles, social media such as Twitter, Wikipedia, etc. Managing this large data and extracting useful information from it is an important task that can be achieved using Natural Language Processing (NLP). … with having NER tools for different human languages (e.g., English, French, Chinese, etc.), domain-specific NER software and tools have been created with the consideration that all sciences and applications have a growing need for NLP. For instance, NER is used in the medical field (Abacha and Zweigenbaum, 2011), for legal documents (Dozier et al., 2010), for …
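The auto-labeling module this abstract mentions can be sketched as gazetteer matching that emits training examples in spaCy's character-offset format. The satellite names and the output format are assumptions for illustration, not SatelliteNER's actual implementation.

```python
import re

# A tiny gazetteer of satellite names (invented subset for illustration).
GAZETTEER = ["Hubble Space Telescope", "Sentinel-2", "Landsat 8"]

def auto_label(text: str, label: str = "SATELLITE"):
    """Return (text, {"entities": [(start, end, label), ...]}) training
    examples by exact gazetteer matching, in spaCy's offset format."""
    spans = []
    for name in GAZETTEER:
        for m in re.finditer(re.escape(name), text):
            spans.append((m.start(), m.end(), label))
    return text, {"entities": sorted(spans)}

example = auto_label("Images from Sentinel-2 and Landsat 8 were compared.")
print(example)
# ('Images from Sentinel-2 and Landsat 8 were compared.',
#  {'entities': [(12, 22, 'SATELLITE'), (27, 36, 'SATELLITE')]})
```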
  • BENNERD: a Neural Named Entity Linking System for COVID-19
    BENNERD: A Neural Named Entity Linking System for COVID-19. Mohammad Golam Sohrab, Khoa N. A. Duong, Makoto Miwa, Goran Topić, Masami Ikeda, and Hiroya Takamura. Artificial Intelligence Research Center (AIRC), National Institute of Advanced Industrial Science and Technology (AIST), Japan; Toyota Technological Institute, Japan.

    Abstract: We present a biomedical entity linking (EL) system BENNERD that detects named entities in text and links them to the unified medical language system (UMLS) knowledge base (KB) entries to facilitate the coronavirus disease 2019 (COVID-19) research. BENNERD mainly covers the biomedical domain, especially new entity types (e.g., coronavirus, viral proteins, immune responses), by addressing the CORD-NER dataset. It includes several NLP tools to process biomedical texts including tokenization, flat and nested entity recognition, and candidate generation and ranking for EL that have been pre-trained using the CORD-NER corpus.

    … in text mining systems, Xuan et al. (2020b) created the CORD-NER dataset with comprehensive NE annotations. The annotations are based on distant or weak supervision. The dataset includes 29,500 documents from the CORD-19 corpus. The CORD-NER dataset sheds light on NER, but it does not address the linking task, which is important for COVID-19 research. For example, in the example sentence in Figure 1, the mention SARS-CoV-2 needs to be disambiguated. Since the term SARS-CoV-2 in this sentence refers to a virus, it should be linked to an entry of a virus in the knowledge base, not to an entry of 'SARS-CoV-2 vaccination', which corresponds to therapeutic or preventive …