Syntactic analyses and named entity recognition for PubMed and PubMed Central — up-to-the-minute

Kai Hakala[1,2]*, Suwisa Kaewphan[1,2,3]*, Tapio Salakoski[1,3] and Filip Ginter[1]

1. Dept. of Information Technology, University of Turku, Finland
2. The University of Turku Graduate School (UTUGS), University of Turku, Finland
3. Turku Centre for Computer Science (TUCS), Finland

[email protected], [email protected], [email protected], [email protected]

* These authors contributed equally.

Abstract

Although advanced text mining methods specifically adapted to the biomedical domain are continuously being developed, their large-scale applications have been scarce. One of the main reasons for this is the lack of computational resources and workforce required for processing large text corpora.

In this paper we present a publicly available resource distributing preprocessed biomedical literature, including sentence splitting, tokenization, part-of-speech tagging, syntactic parses and named entity recognition. The aim of this work is to support the future development of large-scale text mining resources by eliminating the time-consuming but necessary preprocessing steps.

This resource covers the whole of PubMed and the PubMed Central Open Access section, currently containing 26M abstracts and 1.4M full articles, constituting over 388M analyzed sentences. The resource is based on a fully automated pipeline, guaranteeing that the distributed data is always up-to-date. The resource is available at https://turkunlp.github.io/pubmed_parses/.

1 Introduction

Due to the rapid growth of biomedical literature, the maintenance of manually curated databases, usually updated following new discoveries published in articles, has become unfeasible. This has led to a significant interest in developing automated text mining methods specifically for the biomedical domain.

Various community efforts, mainly in the form of shared tasks, have resulted in steady improvement of biomedical text mining methods (Kim et al., 2009; Segura Bedmar et al., 2013). For instance, the GENIA shared tasks, focusing on the extraction of biological events such as gene regulations, have consistently gathered wide interest and have led to the development of several text mining tools (Miwa et al., 2012; Björne and Salakoski, 2013). These methods have also been successfully applied on a large scale, and several biomedical text mining databases are publicly available (Van Landeghem et al., 2013a; Franceschini et al., 2013; Müller et al., 2004). Although such resources exist, their number does not reflect the vast amount of fundamental research invested in the underlying methods, mainly due to the non-trivial amount of manual labor and computational resources required to process large quantities of textual data. Another issue arising from the challenging text preprocessing is the lack of maintenance of the existing databases, which in effect nullifies the purpose of text mining, as these resources tend to be almost as out-of-date as their manually curated counterparts. According to MEDLINE statistics [1], 806,326 new articles were indexed during 2015; a text mining resource will thus miss on average 67 thousand articles for each month it has not been updated.

[1] https://www.nlm.nih.gov/bsd/bsd_key.html

In this paper we present a resource aiming to support the development and maintenance of large-scale biomedical text mining. The resource includes all PubMed abstracts as well as full articles from the open access section of PubMed Central (PMCOA), with the fundamental language technology building blocks, such as part-of-speech (POS) tagging and syntactic parses, readily available. In addition, recognition of several biologically relevant named entities, such as proteins and chemicals, is included. We hope that this resource eliminates the tedious preprocessing involved in utilizing the PubMed data and allows swifter development of new information extraction databases.

The resource is constructed with an automated pipeline which provides weekly updates with the latest articles indexed in PubMed and PubMed Central, ensuring the timeliness of the distributed data. All the data is downloadable in an easily handleable XML format, also used by the widely adopted event extraction system TEES (Björne and Salakoski, 2015). A detailed description of this format is available on the website.
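As a sketch of how the distributed files can be consumed, the following Python snippet streams one downloaded batch and lists its entity annotations. The element and attribute names used here (sentence, entity, charOffset) and the file name are assumptions modeled loosely on TEES-style interaction XML, not the authoritative schema, which is documented on the website.

    # Minimal sketch: stream one downloaded batch file and print its entities.
    # NOTE: element/attribute names ("sentence", "entity", "charOffset") are
    # assumptions in the spirit of TEES interaction XML; consult the resource
    # website for the authoritative schema.
    import xml.etree.ElementTree as ET

    def iter_entities(path):
        # iterparse keeps memory use bounded, which matters for large batches
        for _, elem in ET.iterparse(path, events=("end",)):
            if elem.tag == "sentence":
                for ent in elem.findall("entity"):
                    yield ent.get("type"), ent.get("text"), ent.get("charOffset")
                elem.clear()  # free the subtree once it has been consumed

    if __name__ == "__main__":
        # "pubmed_batch_0001.xml" is a hypothetical file name
        for etype, mention, offsets in iter_entities("pubmed_batch_0001.xml"):
            print(etype, mention, offsets)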
2 Data

We use all publicly available literature from PubMed and the PubMed Central Open Access subset, which cover most of the relevant literature and are commonly used as the prime source of data for biomedical text mining knowledge bases.

PubMed provides titles and abstracts in XML format as a collection of a baseline release and subsequent updates; the former is made available at the end of each year, whereas the latter is updated daily. As this project was started during 2015, we first processed the baseline release from the end of 2014 and then extended this data with the new publications from the end-of-2015 baseline release. The rest of the data has been collected from the daily updates.

The full articles in the PMC Open Access subset (PMCOA) are retrieved via the PMC FTP service. Multiple data formats are provided in PMCOA, including the NXML and TXT formats, which are suitable for text processing; we use the provided NXML format as it is compatible with our processing pipeline. This service does not provide distinct incremental updates, but a weekly updated list of all indexed articles.

3 Processing Pipeline

In this section, we discuss our processing pipeline, shown in Figure 1. Firstly, both PubMed and PMCOA documents are downloaded from the NCBI FTP services. For the periodical updates of our resource this is done weekly, the same interval at which the official PMCOA dataset is updated. From the PubMed incremental updates we only include newly added documents and ignore other updates. As PMCOA does not provide incremental updates, we use the index file and compare it to the previous file list to select new articles for processing.
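This update selection reduces to a set difference between the freshly downloaded index and the file list recorded during the previous run. A minimal sketch of the step, with hypothetical file names:

    # Minimal sketch of the PMCOA update selection described above: diff the
    # current index against the file list kept from the previous run.
    # File names are hypothetical; the pipeline's actual bookkeeping may differ.

    def read_file_list(path):
        with open(path, encoding="utf-8") as f:
            return {line.strip() for line in f if line.strip()}

    previous = read_file_list("oa_file_list.prev.txt")  # snapshot from last week
    current = read_file_list("oa_file_list.txt")        # freshly downloaded index

    new_articles = sorted(current - previous)           # only these get processed
    print(f"{len(new_articles)} new articles queued for processing")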
Even though the PubMed and PMCOA documents are provided in slightly different XML formats, they can be processed in a similar fashion. As a result, the rest of the pipeline discussed in this section is applied to both document types.

Both PubMed XML articles and PMCOA NXML full texts are preprocessed using publicly available tools [2] (Pyysalo et al., 2013). These tools convert the XML documents to plain text and change the character encoding from UTF-8 to ASCII, as many legacy language processing tools are incapable of handling non-ASCII characters. Additionally, all excess metadata is removed, leaving the titles, abstracts and full-text contents for further processing. The documents are subsequently split into sentences with the GENIA sentence splitter (Sætre et al., 2007), as most linguistic analyses are done on the sentence level. The GENIA sentence splitter is trained on biomedical text (the GENIA corpus) and has state-of-the-art performance on this domain.

[2] https://github.com/spyysalo/nxml2txt

The whole data is parsed with the BLLIP constituent parser (Charniak and Johnson, 2005), using a model adapted to the biomedical domain (McClosky, 2010), as provided in the TEES processing pipeline. The distributed tokenization and POS tagging are also produced with this parser pipeline. We chose this tool because the performance of the TEES software has previously been evaluated on a large scale together with this parsing pipeline (Van Landeghem et al., 2013b), making it a reliable choice for biomedical relation extraction. Since dependency parsing has become the prevalent approach to modeling syntactic relations, we also provide conversions to the collapsed Stanford dependency scheme (De Marneffe et al., 2006).

The pipeline is run in parallel on a cluster computer with the input data divided into smaller batches. The size of these batches is altered along the pipeline to adapt to the varying computational requirements of the different tools.

3.1 Named Entity Recognition

Named entity recognition (NER) is one of the fundamental tasks in BioNLP, as most of the crucial biological information is expressed as relations among entities such as genes and proteins. To support further development on this dataset, we provide named entity tagging for five entity types: cell lines, chemicals, diseases, genes/proteins (GGPs) and organisms (Table 1). [...] the SPECIES corpus being an exception. For this corpus we do our own data division with random sampling on the document level, for each taxonomy category separately.

Entity type | Our system (P / R / F) | State-of-the-art system (P / R / F) | Reference
Cell line   | 89.88 / 84.36 / 87.03  | 91.67 / 85.47 / 88.46               | (Kaewphan et al., 2016)
Chemical    | 85.27 / 82.92 / 84.08  | 89.09 / 85.75 / 87.39               | (Leaman et al., 2015)
Disease*    | 86.32 / 80.83 / 83.49  | 82.80 / 81.90 / 80.90               | (Leaman et al., 2013)
GGP**       | 74.27 / 72.99 / 73.62  | 90.22 / 84.82 / 87.17               | (Campos et al., 2013)
Organism    | 77.15 / 80.15 / 78.63  | 83.90 / 72.60 / 77.80               | (Pafilis et al., 2013)

Table 1: Evaluation of the named entity recognition for each entity type on the test sets, measured with strict entity-level metrics (P = precision, R = recall, F = F-score). Reported results for the corresponding state-of-the-art approaches are shown for comparison.
* The evaluation of the best performing system for disease mentions covers the combination of named entity recognition and normalization.
** The official BioCreative II evaluation of our GGP model yields 84.67, 84.54 and 84.60 for precision, recall and F-score respectively; these numbers are comparable to the listed state-of-the-art method.
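For reference, the strict entity-level metrics of Table 1 count a predicted entity as correct only when both its exact character span and its type match a gold annotation. The following is a minimal sketch of such an evaluation; it is our illustration, not the evaluation code used for the table:

    # Strict entity-level precision/recall/F-score: a prediction is a true
    # positive only if its (start, end, type) triple exactly matches a gold
    # annotation. Illustration only, not the paper's evaluation code.

    def strict_prf(gold, predicted):
        """gold, predicted: sets of (start, end, entity_type) triples."""
        tp = len(gold & predicted)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f_score = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        return precision, recall, f_score

    gold = {(0, 5, "GGP"), (12, 21, "Chemical")}
    pred = {(0, 5, "GGP"), (12, 20, "Chemical")}  # off-by-one span: no credit
    print(strict_prf(gold, pred))                 # (0.5, 0.5, 0.5)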
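Finally, for readers who want to reproduce a syntactic analysis comparable to the parsing step of Section 3 on their own text, the BLLIP parser with the McClosky biomedical model is also usable outside the TEES pipeline. The sketch below reflects our understanding of the bllipparser Python bindings; the model name "GENIA+PubMed" and the Stanford converter invocation are assumptions to be verified against the respective tools' documentation, and this is not the authors' pipeline code.

    # Sketch: constituency parsing with BLLIP and the McClosky biomedical
    # model via the bllipparser bindings (pip install bllipparser). Model
    # name and converter invocation are assumptions; verify against the
    # tools' documentation. Not the authors' pipeline code.
    from bllipparser import RerankingParser

    # Downloads and caches the biomedical parsing model on first use.
    rrp = RerankingParser.fetch_and_load("GENIA+PubMed", verbose=True)

    tree = rrp.simple_parse("Stat3 phosphorylation is induced by IL-6.")
    print(tree)  # Penn Treebank-style bracketing of the best parse

    # Collapsed Stanford dependencies, as distributed with the resource, can
    # then be derived from such trees with the Stanford converter, e.g.:
    #   java edu.stanford.nlp.trees.EnglishGrammaticalStructure \
    #        -treeFile trees.mrg -collapsed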