CPSC 540: Machine Learning Deep CRFs, Convolutional Neural Networks

Total Pages: 16

File Type: PDF, Size: 1020 KB

CPSC 540: Machine Learning
Deep CRFs, Convolutional Neural Networks
Mark Schmidt, University of British Columbia, Winter 2017

Admin:
- Assignment 4: 1 late day to hand in tonight, 2 for Monday.
- Assignment 5 coming soon.
- Project description coming soon. Final details coming soon.
- Bonus lecture on April 10th (same time and place)?

Last Time: Log-Linear Models

We discussed log-linear models like the Ising model,
$$p(x_1, x_2, \dots, x_d) = \frac{1}{Z} \exp\left( \sum_{i=1}^d x_i w + \sum_{(i,j) \in E} x_i x_j v \right),$$
where if $v$ is large it encourages neighbouring values to be the same. We also discussed CRF variants that generalize logistic regression,
$$p(y_1, y_2, \dots, y_d \mid x_1, x_2, \dots, x_d) = \frac{1}{Z} \exp\left( \sum_{i=1}^d y_i w^T x_i + \sum_{(i,j) \in E} y_i y_j v \right).$$
We can jointly learn a logistic regression model and the label dependencies.

Last Time: Structured SVMs

In generative UGM models (MRFs) we optimize the joint likelihood,
$$f(w) = -\sum_{i=1}^n \log p(y^i, x^i \mid w).$$
In discriminative UGM models (CRFs) we optimize the conditional likelihood,
$$f(w) = -\sum_{i=1}^n \log p(y^i \mid x^i, w).$$
In structured SVMs we penalize decisions from decoding the conditional likelihood,
$$f(w) = \sum_{i=1}^n \max_{y'} \left[ g(y^i, y') - \log p(y^i \mid x^i, w) + \log p(y' \mid x^i, w) \right],$$
where $g(y^i, y')$ is the penalty for predicting $y'$ when the true label is $y^i$. This only requires decoding to evaluate the objective/subgradient.

Outline:
1. Neural Networks Review
2. Deep Conditional Random Fields
3. Convolutional Neural Network

Feedforward Neural Networks

In 340 we discussed feedforward neural networks for supervised learning. With 1 hidden layer the classic model has this structure (figure omitted). Motivation: for some problems it's hard to find good features. This learns features $z$ that are good for supervised learning.

Neural Networks as DAG Models

It's a DAG model, but there is an important difference with our previous models: the latent variables $z_c$ are deterministic functions of the $x_j$. This makes inference trivial: if you observe all $x_j$ you also observe all $z_c$. In this case $y$ is the only random variable.

Neural Network Notation

We'll continue using our supervised learning notation:
$$X = \begin{bmatrix} (x^1)^T \\ (x^2)^T \\ \vdots \\ (x^n)^T \end{bmatrix}, \qquad
y = \begin{bmatrix} y^1 \\ y^2 \\ \vdots \\ y^n \end{bmatrix}.$$
For the latent features and two sets of parameters we'll use
$$Z = \begin{bmatrix} (z^1)^T \\ (z^2)^T \\ \vdots \\ (z^n)^T \end{bmatrix}, \qquad
w = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_k \end{bmatrix}, \qquad
W = \begin{bmatrix} W_1 \\ W_2 \\ \vdots \\ W_k \end{bmatrix},$$
where $Z$ is $n$ by $k$ and $W$ is $k$ by $d$.

Introducing Non-Linearity

We discussed how the "linear-linear" model,
$$z^i = W x^i, \qquad y^i = w^T z^i,$$
is degenerate since it's still a linear model. The classic solution is to introduce a non-linearity,
$$z^i = h(W x^i), \qquad y^i = w^T z^i,$$
where a common choice is applying the sigmoid element-wise,
$$z^i_c = \frac{1}{1 + \exp(-W_c x^i)},$$
which is said to be the "activation" of neuron $c$ on example $i$.
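As a concrete illustration of the one-hidden-layer model with a sigmoid non-linearity, here is a minimal numpy sketch (dimensions and values are made up; this is not code from the course):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy dimensions: d input features, k hidden units (illustrative values).
d, k = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(size=(k, d))   # hidden-layer weights, k by d
w = rng.normal(size=k)        # output weights

x_i = rng.normal(size=d)      # one example x^i

z_i = sigmoid(W @ x_i)        # z^i_c = 1 / (1 + exp(-W_c x^i)), the "activation" of neuron c
y_hat = w @ z_i               # y^i = w^T z^i

print(z_i, y_hat)
```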
This gives a universal approximator with $k$ large enough.

Deep Neural Networks

In deep neural networks we add multiple hidden layers. Mathematically, with 3 hidden layers the classic model uses
$$y^i = w^T \underbrace{h(W^3 \underbrace{h(W^2 \underbrace{h(W^1 x^i)}_{z^{i1}})}_{z^{i2}})}_{z^{i3}}.$$

Biological Motivation

Deep learning is motivated by theories of deep hierarchies in the brain.
https://en.wikibooks.org/wiki/Sensory_Systems/Visual_Signal_Processing
But most research is about making models work better, not be more brain-like.

Deep Neural Network History

Popularity of deep learning has come in waves over the years. Currently, it is one of the hottest topics in science. The recent popularity is due to unprecedented performance on some difficult tasks: speech recognition, computer vision, and machine translation. These are mainly due to big datasets, deep models, and tons of computation, plus some tweaks to the classic models. For a NY Times article discussing some of the history/successes/issues, see:
https://mobile.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html

Training Deep Neural Networks

If we're training a 3-layer network with squared error, our objective is
$$f(w, W^1, W^2, W^3) = \frac{1}{2} \sum_{i=1}^n \left( w^T h(W^3 h(W^2 h(W^1 x^i))) - y^i \right)^2.$$
The usual training procedure is stochastic gradient. This objective is highly non-convex and notoriously difficult to tune, but recent empirical/theoretical work indicates non-convexity may not be an issue: all local minima may be good for "large enough" networks, and we're discovering sets of tricks to make things easier to tune.

Some common data/optimization tricks we discussed in 340:
- Data transformations. For images, translate/rotate/scale/crop each $x^i$ to make more data.
- Data standardization: centering and whitening.
- Adding bias variables. For sigmoids, this corresponds to adding a row of zeroes to each $W^m$.
- Parameter initialization: "small but different", standardizing within layers.
- Step-size selection: "babysitting", the Bottou trick, Adam, momentum.
- Momentum: heavy-ball and Nesterov-style modifications.
- Batch normalization: adaptive standardizing within layers.
- ReLU: replacing the sigmoid with $[W_c x^i]^+$. This avoids gradients that are extremely close to zero.

Some common regularization tricks we discussed in 340:
- Standard L2-regularization or L1-regularization ("weight decay"), sometimes with a different $\lambda$ for each layer.
- Early stopping of the optimization based on validation accuracy.
- Dropout randomly zeroes $z$ values to discourage dependence.
- Hyper-parameter optimization to choose the various tuning parameters.
- Special architectures like convolutional neural networks and LSTMs (later), which yield $W^m$ that are very sparse and have many tied parameters.

Backpropagation as Message-Passing

Computing the gradient in neural networks is called backpropagation. It is derived from the chain rule and memoization of repeated quantities.
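Before moving to the message-passing view of that gradient computation, here is a minimal sketch of evaluating the squared-error objective above for a 3-layer network with a sigmoid $h$ (an illustration with made-up sizes, not code from the course):

```python
import numpy as np

def h(a):                      # sigmoid non-linearity
    return 1.0 / (1.0 + np.exp(-a))

def predict(x, w, W1, W2, W3):
    # y^i = w^T h(W^3 h(W^2 h(W^1 x^i)))
    return w @ h(W3 @ h(W2 @ h(W1 @ x)))

def objective(X, y, w, W1, W2, W3):
    # f = 1/2 * sum_i (prediction_i - y_i)^2
    preds = np.array([predict(x, w, W1, W2, W3) for x in X])
    return 0.5 * np.sum((preds - y) ** 2)

# Toy data and parameters (illustrative sizes only).
rng = np.random.default_rng(0)
n, d, k = 5, 4, 3
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
W1 = rng.normal(size=(k, d))
W2 = rng.normal(size=(k, k))
W3 = rng.normal(size=(k, k))
w = rng.normal(size=k)

print(objective(X, y, w, W1, W2, W3))
```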
We're going to view backpropagation as a message-passing algorithm. Key advantages of this view:
- It's easy to handle different graph structures.
- It's easy to handle different non-linear transformations.
- It's easy to handle multiple outputs (as in structured prediction).
- It's easy to add non-deterministic parts and combine with other graphical models.

Backpropagation Forward Pass

Consider computing the output of a neural network for an example $i$,
$$y^i = w^T h(W^3 h(W^2 h(W^1 x^i)))
= \sum_{c=1}^k w_c\, h\!\left( \sum_{c'=1}^k W^3_{c'c}\, h\!\left( \sum_{c''=1}^k W^2_{c''c'}\, h\!\left( \sum_{j=1}^d W^1_{c''j} x^i_j \right) \right) \right),$$
where we've assumed that all hidden layers have $k$ values. In the second form, the $h$ functions are single-input single-output. The nested sum structure is similar to our message-passing structures. However, it's easier because it's deterministic: there are no random variables to sum over, so the messages will be scalars rather than functions.

Forward propagation through the neural network as message passing:
$$\begin{aligned}
y^i &= \sum_{c=1}^k w_c\, h\!\left( \sum_{c'=1}^k W^3_{c'c}\, h\!\left( \sum_{c''=1}^k W^2_{c''c'}\, h\!\left( \sum_{j=1}^d W^1_{c''j} x^i_j \right) \right) \right) \\
&= \sum_{c=1}^k w_c\, h\!\left( \sum_{c'=1}^k W^3_{c'c}\, h\!\left( \sum_{c''=1}^k W^2_{c''c'}\, h(M_{c''}) \right) \right) \\
&= \sum_{c=1}^k w_c\, h\!\left( \sum_{c'=1}^k W^3_{c'c}\, h(M_{c'}) \right) \\
&= \sum_{c=1}^k w_c\, h(M_c) \\
&= M_y,
\end{aligned}$$
where the intermediate messages are the $z$ before transformation, e.g. $z^{i2}_{c'} = h(M_{c'})$.

Backpropagation Backward Pass

The backpropagation backward pass computes the partial derivatives. For a loss $f$, the partial derivatives in the last layer have the form
$$\frac{\partial f}{\partial w_c} = z^{i3}_c\, f'\!\left( w^T h(W^3 h(W^2 h(W^1 x^i))) \right),$$
where
$$z^{i3}_c = h\!\left( \sum_{c'=1}^k W^3_{c'c}\, h\!\left( \sum_{c''=1}^k W^2_{c''c'}\, h\!\left( \sum_{j=1}^d W^1_{c''j} x^i_j \right) \right) \right).$$
Written in terms of messages it simplifies to
$$\frac{\partial f}{\partial w_c} = h(M_c) f'(M_y).$$

In terms of the forward messages, the partial derivatives have the forms
$$\frac{\partial f}{\partial w_c} = h(M_c) f'(M_y),$$
$$\frac{\partial f}{\partial W^3_{c'c}} = h(M_{c'})\, h'(M_c)\, w_c\, f'(M_y),$$
$$\frac{\partial f}{\partial W^2_{c''c'}} = h(M_{c''})\, h'(M_{c'}) \sum_{c=1}^k W^3_{c'c}\, h'(M_c)\, w_c\, f'(M_y),$$
$$\frac{\partial f}{\partial W^1_{jc''}} = x^i_j\, h'(M_{c''}) \sum_{c'=1}^k W^2_{c''c'}\, h'(M_{c'}) \sum_{c=1}^k W^3_{c'c}\, h'(M_c)\, w_c\, f'(M_y),$$
which are ugly, but notice all the repeated calculations.
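The repeated terms suggest computing the gradient by storing and reusing the forward messages. Below is a small numpy sketch of the backward pass for one example under squared error. It follows the formulas above but uses the standard matrix orientation $M = Wz$ (so an index pair like $W^3_{c'c}$ in the slides corresponds to `W3[c, c']` in the code); this is my own illustration rather than the course's code:

```python
import numpy as np

def h(a):                     # sigmoid
    return 1.0 / (1.0 + np.exp(-a))

def hp(a):                    # sigmoid derivative h'(a) = h(a) * (1 - h(a))
    s = h(a)
    return s * (1.0 - s)

def backprop_one_example(x, y, w, W1, W2, W3):
    # Forward pass: store the messages (pre-activations) at every layer.
    M1 = W1 @ x;  z1 = h(M1)
    M2 = W2 @ z1; z2 = h(M2)
    M3 = W3 @ z2; z3 = h(M3)
    My = w @ z3                      # final message = prediction
    r = My - y                       # f'(M_y) for f = 1/2 (M_y - y)^2

    # Backward pass: reuse messages instead of recomputing the nested sums.
    grad_w = z3 * r                              # df/dw_c = h(M_c) f'(M_y)
    delta3 = r * w * hp(M3)                      # shared factor w_c h'(M_c) f'(M_y)
    grad_W3 = np.outer(delta3, z2)               # df/dW3  = h(M_{c'}) h'(M_c) w_c f'(M_y)
    delta2 = (W3.T @ delta3) * hp(M2)            # sums over c folded into delta2
    grad_W2 = np.outer(delta2, z1)
    delta1 = (W2.T @ delta2) * hp(M1)
    grad_W1 = np.outer(delta1, x)                # x^i_j enters the first-layer gradient
    return grad_w, grad_W1, grad_W2, grad_W3

# Tiny finite-difference check on one entry of w (illustrative sizes).
rng = np.random.default_rng(0)
d, k = 4, 3
x, y = rng.normal(size=d), 0.7
W1, W2, W3 = rng.normal(size=(k, d)), rng.normal(size=(k, k)), rng.normal(size=(k, k))
w = rng.normal(size=k)
grad_w, *_ = backprop_one_example(x, y, w, W1, W2, W3)

def f(wv):
    return 0.5 * (wv @ h(W3 @ h(W2 @ h(W1 @ x))) - y) ** 2

eps = 1e-6
w_pert = w.copy(); w_pert[0] += eps
print(grad_w[0], (f(w_pert) - f(w)) / eps)   # the two numbers should roughly agree
```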
Recommended publications
  • Information Extraction from Electronic Medical Records Using Multitask Recurrent Neural Network with Contextual Word Embedding
    applied sciences Article Information Extraction from Electronic Medical Records Using Multitask Recurrent Neural Network with Contextual Word Embedding Jianliang Yang 1 , Yuenan Liu 1, Minghui Qian 1,*, Chenghua Guan 2 and Xiangfei Yuan 2 1 School of Information Resource Management, Renmin University of China, 59 Zhongguancun Avenue, Beijing 100872, China 2 School of Economics and Resource Management, Beijing Normal University, 19 Xinjiekou Outer Street, Beijing 100875, China * Correspondence: [email protected]; Tel.: +86-139-1031-3638 Received: 13 August 2019; Accepted: 26 August 2019; Published: 4 September 2019 Abstract: Clinical named entity recognition is an essential task for humans to analyze large-scale electronic medical records efficiently. Traditional rule-based solutions need considerable human effort to build rules and dictionaries; machine learning-based solutions need laborious feature engineering. For the moment, deep learning solutions like Long Short-term Memory with Conditional Random Field (LSTM–CRF) achieved considerable performance in many datasets. In this paper, we developed a multitask attention-based bidirectional LSTM–CRF (Att-biLSTM–CRF) model with pretrained Embeddings from Language Models (ELMo) in order to achieve better performance. In the multitask system, an additional task named entity discovery was designed to enhance the model’s perception of unknown entities. Experiments were conducted on the 2010 Informatics for Integrating Biology & the Bedside/Veterans Affairs (I2B2/VA) dataset. Experimental results show that our model outperforms the state-of-the-art solution both on the single model and ensemble model. Our work proposes an approach to improve the recall in the clinical named entity recognition task based on the multitask mechanism.
  • A Multitask Bi-Directional RNN Model for Named Entity Recognition On
Chowdhury et al. BMC Bioinformatics 2018, 19(Suppl 17):499. https://doi.org/10.1186/s12859-018-2467-9. RESEARCH, Open Access.

A multitask bi-directional RNN model for named entity recognition on Chinese electronic medical records
Shanta Chowdhury 1, Xishuang Dong 1, Lijun Qian 1, Xiangfang Li 1*, Yi Guan 2, Jinfeng Yang 3 and Qiubin Yu 4
From The International Conference on Intelligent Biology and Medicine (ICIBM) 2018, Los Angeles, CA, USA, 10-12 June 2018.

Abstract
Background: Electronic Medical Records (EMRs) comprise patients' medical information gathered by medical staff for providing better health care. Named Entity Recognition (NER) is a sub-field of information extraction aimed at identifying specific entity terms such as disease, test, symptom, genes etc. NER can be a relief for healthcare providers and medical specialists to extract useful information automatically and avoid unnecessary and unrelated information in EMRs. However, the limited resources of available EMRs pose a great challenge for mining entity terms. Therefore, a multitask bi-directional RNN model is proposed here as a potential solution of data augmentation to enhance NER performance with limited data.
Methods: A multitask bi-directional RNN model is proposed for extracting entity terms from Chinese EMRs. The proposed model can be divided into a shared layer and a task-specific layer. Firstly, the vector representation of each word is obtained as a concatenation of word embedding and character embedding. Then a bi-directional RNN is used to extract context information from the sentence. After that, all these layers are shared by two different task layers, namely the part-of-speech tagging task layer and the named entity recognition task layer.
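A schematic PyTorch sketch of that shared-plus-task-specific layout follows. This is my own simplification, not the authors' implementation: the character-level embedding is treated here as a precomputed per-word feature vector, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultitaskTagger(nn.Module):
    def __init__(self, vocab_size, word_dim=50, char_feat_dim=20,
                 hidden=64, n_pos_tags=30, n_ner_tags=9):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Shared layer: concatenated word + character features -> bi-directional RNN context.
        self.birnn = nn.LSTM(word_dim + char_feat_dim, hidden,
                             batch_first=True, bidirectional=True)
        # Task-specific layers on top of the shared representation.
        self.pos_head = nn.Linear(2 * hidden, n_pos_tags)   # part-of-speech tagging task
        self.ner_head = nn.Linear(2 * hidden, n_ner_tags)   # named entity recognition task

    def forward(self, word_ids, char_feats):
        x = torch.cat([self.word_emb(word_ids), char_feats], dim=-1)
        context, _ = self.birnn(x)                 # (batch, seq, 2*hidden)
        return self.pos_head(context), self.ner_head(context)

# Toy batch: 2 sentences of 5 tokens each (illustrative values).
model = MultitaskTagger(vocab_size=100)
word_ids = torch.randint(0, 100, (2, 5))
char_feats = torch.randn(2, 5, 20)
pos_scores, ner_scores = model(word_ids, char_feats)
print(pos_scores.shape, ner_scores.shape)   # (2, 5, 30) and (2, 5, 9)
```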
  • ML Cheatsheet Documentation
ML Cheatsheet Documentation (Team, Sep 02, 2021)

Contents: Linear Regression; Gradient Descent; Logistic Regression; Glossary; Calculus; Linear Algebra; Probability; Statistics; Notation; Concepts; Forwardpropagation; Backpropagation; Activation Functions; Layers; Loss Functions; Optimizers; Regularization; Architectures; Classification Algorithms; Clustering Algorithms; Regression Algorithms; Reinforcement Learning; Datasets; Libraries; Papers; Other Content; Contribute.

Brief visual explanations of machine learning concepts with diagrams, code examples and links to resources for learning more. Warning: this document is under early stage development. If you find errors, please raise an issue or contribute a better definition!

Chapter 1: Linear Regression. Topics: Introduction; Simple regression (Making predictions, Cost function, Gradient descent, Training, Model evaluation, Summary); Multivariable regression (Growing complexity, Normalization, Making predictions, Initialize weights, Cost function, Gradient descent, Simplifying with matrices, Bias term, Model evaluation).

1.1 Introduction. Linear Regression is a supervised machine learning algorithm where the predicted output is continuous and has a constant slope. It's used to predict values within a continuous range (e.g. sales, price) rather than trying to classify them into categories (e.g. cat, dog). There are two main types. Simple regression: simple linear regression uses the traditional slope-intercept form $y = mx + b$, where $m$ and $b$ are the variables our algorithm will try to "learn" to produce the most accurate predictions.
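To make the slope-intercept description concrete, here is a tiny sketch of learning $m$ and $b$ by gradient descent on a mean squared error cost. This is my own illustration with made-up data, not code taken from the cheatsheet itself.

```python
import numpy as np

# Toy data roughly following y = 2x + 1 with noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

m, b = 0.0, 0.0          # parameters the algorithm will "learn"
lr = 0.01                # learning rate

for _ in range(2000):
    pred = m * x + b                     # slope-intercept prediction
    error = pred - y
    # Gradients of the MSE cost (1/n) * sum((m*x + b - y)^2)
    grad_m = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    m -= lr * grad_m
    b -= lr * grad_b

print(m, b)   # should end up close to 2 and 1
```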
  • CRF, CTC, Word2vec
NPFL114, Lecture 9: CRF, CTC, Word2Vec. Milan Straka, April 26, 2021. Charles University in Prague, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics.

Structured Prediction

Consider generating a sequence $y_1, \dots, y_N \in Y^N$ given an input $x_1, \dots, x_N$. Predicting each sequence element independently models the distribution $P(y_i \mid X)$. However, there may be dependencies among the $y_i$ themselves, which is difficult to capture by independent element classification.

Maximum Entropy Markov Models

We might model the dependencies by assuming that the output sequence is a Markov chain, and model it as $P(y_i \mid X, y_{i-1})$. Each label would be predicted by a softmax from the hidden state and the previous label. The decoding can then be performed by a dynamic programming algorithm.

However, MEMMs suffer from a so-called label bias problem. Because the probability is factorized, each $P(y_i \mid X, y_{i-1})$ is a distribution and must sum to one. Imagine there was a label error during prediction. In the next step, the model might "realize" that the previous label has a very low probability of being followed by any label; however, it cannot express this by setting the probability of all following labels low, it has to "conserve the mass".

Conditional Random Fields

Let $G = (V, E)$ be a graph such that $y$ is indexed by the vertices of $G$.
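As an illustration of the dynamic-programming decoding mentioned above, here is generic Viterbi decoding for a linear-chain model with per-position emission scores and pairwise transition scores. This is a sketch of the standard algorithm, not code from the lecture, and the toy scores are made up.

```python
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (N, L) score of label l at position i;
    transitions: (L, L) score of moving from label a to label b.
    Returns the highest-scoring label sequence."""
    N, L = emissions.shape
    score = emissions[0].copy()            # best score ending in each label at position 0
    back = np.zeros((N, L), dtype=int)
    for i in range(1, N):
        # cand[a, b] = best score up to i-1 ending in a, plus transition a -> b
        cand = score[:, None] + transitions
        back[i] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0) + emissions[i]
    best = [int(np.argmax(score))]
    for i in range(N - 1, 0, -1):          # trace back pointers from the last position
        best.append(int(back[i][best[-1]]))
    return best[::-1]

# Toy example with 4 positions and 3 labels (illustrative numbers).
rng = np.random.default_rng(0)
emissions = rng.normal(size=(4, 3))
transitions = rng.normal(size=(3, 3))
print(viterbi(emissions, transitions))
```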
  • PDF Hosted at the Radboud Repository of the Radboud University Nijmegen
PDF hosted at the Radboud Repository of the Radboud University Nijmegen. The following full text is a publisher's version. For additional information about this publication click this link: http://hdl.handle.net/2066/179506. Please be advised that this information was generated on 2021-09-30 and may be subject to change.

Algorithmic composition of polyphonic music with the WaveCRF
Umut Güçlü*, Yağmur Güçlütürk*, Luca Ambrogioni, Eric Maris, Rob van Lier, Marcel van Gerven; Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands. {u.guclu, y.gucluturk}@donders.ru.nl (*Equal contribution)

Abstract: Here, we propose a new approach for modeling conditional probability distributions of polyphonic music by combining WaveNet and CRF-RNN variants, and show that this approach beats LSTM and WaveNet baselines that do not take into account the statistical dependencies between simultaneous notes.

1 Introduction: The history of algorithmically generated music dates back to the musikalisches Würfelspiel (musical dice game) of the eighteenth century. These early methods share with modern machine learning techniques the emphasis on stochasticity as a crucial ingredient for "machine creativity". In recent years, artificial neural networks dominated the creative music generation literature [3]. In several of the most successful architectures, such as WaveNet, the output of the network is an array that specifies the probabilities of each note given the already produced composition [3]. This approach tends to be infeasible in the case of polyphonic music because the network would need to output a probability value for every possible simultaneous combination of notes. The naive strategy of producing each note independently produces less realistic compositions as it ignores the statistical dependencies between simultaneous notes.
  • Conditional Random Fields Meet Deep Neural Networks for Semantic
IEEE Signal Processing Magazine, Vol. XX, No. XX, January 2018

Conditional Random Fields Meet Deep Neural Networks for Semantic Segmentation
Anurag Arnab*, Shuai Zheng*, Sadeep Jayasumana, Bernardino Romera-Paredes, Måns Larsson, Alexander Kirillov, Bogdan Savchynskyy, Carsten Rother, Fredrik Kahl and Philip Torr

Abstract: Semantic Segmentation is the task of labelling every pixel in an image with a pre-defined object category. It has numerous applications in scenarios where the detailed understanding of an image is required, such as in autonomous vehicles and medical diagnosis. This problem has traditionally been solved with probabilistic models known as Conditional Random Fields (CRFs) due to their ability to model the relationships between the pixels being predicted. However, Deep Neural Networks (DNNs) have recently been shown to excel at a wide range of computer vision problems due to their ability to learn rich feature representations automatically from data, as opposed to traditional hand-crafted features. The idea of combining CRFs and DNNs has achieved state-of-the-art results in a number of domains.

[...] object category. Semantic Segmentation, the main focus of this article, aims for a more precise understanding of the scene by assigning an object category label to each pixel within the image. Recently, researchers have also begun tackling new scene understanding problems such as instance segmentation, which aims to assign a unique identifier to each segmented object in the image, as well as bridging the gap between natural language processing and computer vision with tasks such as image captioning and visual question answering, which aim at describing an image in words, and answering textual questions from images respectively.
  • Evaluating the Utility of Hand-Crafted Features in Sequence Labelling∗
Evaluating the Utility of Hand-crafted Features in Sequence Labelling*
Minghao Wu, Fei Liu, Trevor Cohn
The University of Melbourne, Victoria, Australia; JD AI Research, Beijing, China
[email protected], [email protected], [email protected]

Abstract: Conventional wisdom is that hand-crafted features are redundant for deep learning models, as they already learn adequate representations of text automatically from corpora. In this work, we test this claim by proposing a new method for exploiting handcrafted features as part of a novel hybrid learning approach, incorporating a feature auto-encoder loss component. We evaluate on the task of named entity recognition (NER), where we show that including manual features for part-of-speech, word shapes and gazetteers can improve the performance of a neural CRF model. We obtain a F1 of 91.89 for the CoNLL-2003 English shared task, which significantly outperforms a collection of highly competitive baseline models.

Introduction: Orthogonal to the advances in deep learning is the effort spent on feature engineering. A representative example is the task of named entity recognition (NER), one that requires both lexical and syntactic knowledge, where, until recently, most models heavily rely on statistical sequential labelling models taking in manually engineered features (Florian et al., 2003; Chieu and Ng, 2002; Ando and Zhang, 2005). Typical features include POS and chunk tags, prefixes and suffixes, and external gazetteers, all of which represent years of accumulated knowledge in the field of computational linguistics. The work of Collobert et al. (2011) started the trend of feature engineering-free modelling by learning internal representations of compo- [...]
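A "word shape" is one of the hand-crafted features the abstract mentions. A minimal sketch of such a feature might map each character to a class and optionally collapse repeated runs; this is my own illustration, not the paper's feature extractor.

```python
import re

def word_shape(token: str, collapse: bool = True) -> str:
    """Map characters to classes: X (uppercase), x (lowercase), d (digit), else kept as-is."""
    shape = []
    for ch in token:
        if ch.isupper():
            shape.append("X")
        elif ch.islower():
            shape.append("x")
        elif ch.isdigit():
            shape.append("d")
        else:
            shape.append(ch)
    s = "".join(shape)
    if collapse:
        s = re.sub(r"(.)\1+", r"\1", s)   # collapse runs: "XxXxxxxx" -> "XxXx"
    return s

print(word_shape("McCallum"))    # XxXx
print(word_shape("CoNLL-2003"))  # XxX-d
```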
  • Deep Recurrent Conditional Random Field Network for Protein Secondary Prediction
Deep Recurrent Conditional Random Field Network for Protein Secondary Prediction
Johansen, Alexander Rosenberg; Sønderby, Casper Kaae; Sønderby, Søren Kaae; Winther, Ole
Published in: ACM-BCB '17. DOI: 10.1145/3107411.3107489. Publication date: 2017.
Citation for published version (APA): Johansen, A. R., Sønderby, C. K., Sønderby, S. K., & Winther, O. (2017). Deep Recurrent Conditional Random Field Network for Protein Secondary Prediction. In ACM-BCB '17: Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics (pp. 73-78). https://doi.org/10.1145/3107411.3107489
Download date: 28. Sep. 2021.
Session 3: Proteins and RNA Structure, Dynamics, and Analysis I. ACM-BCB'17, August 20–23, 2017, Boston, MA, USA.

Alexander Rosenberg Johansen (Technical University of Denmark, DTU Compute, [email protected]); Casper Kaae Sønderby (University of Copenhagen, Bioinformatics Centre, [email protected]); Søren Kaae Sønderby (University of Copenhagen, Bioinformatics Centre, [email protected]); Ole Winther (Technical University of Denmark, DTU Compute, [email protected])

Abstract: Deep learning has become the state-of-the-art method for predicting protein secondary structure from only its amino acid residues and sequence profile. Building upon these results, we propose to combine a bi-directional recurrent neural network (biRNN) with [...]

[...] methods [18] has emerged and received state-of-the-art performance [6, 19, 26, 29, 31]. These approaches have arisen as the challenge of annotating secondary protein structures is predictable from conformations along the sequence of the amino acid and sequence profiles [29].
  • Conditional Random Fields by Charles Sutton and Andrew Mccallum
Foundations and Trends® in Machine Learning, Vol. 4, No. 4 (2011) 267–373. © 2012 C. Sutton and A. McCallum. DOI: 10.1561/2200000013

An Introduction to Conditional Random Fields
By Charles Sutton and Andrew McCallum

Contents: 1 Introduction (1.1 Implementation Details); 2 Modeling (2.1 Graphical Modeling; 2.2 Generative versus Discriminative Models; 2.3 Linear-chain CRFs; 2.4 General CRFs; 2.5 Feature Engineering; 2.6 Examples; 2.7 Applications of CRFs; 2.8 Notes on Terminology); 3 Overview of Algorithms; 4 Inference (4.1 Linear-Chain CRFs; 4.2 Inference in Graphical Models; 4.3 Implementation Concerns); 5 Parameter Estimation (5.1 Maximum Likelihood; 5.2 Stochastic Gradient Methods; 5.3 Parallelism; 5.4 Approximate Training; 5.5 Implementation Concerns); 6 Related Work and Future Directions (6.1 Related Work; 6.2 Frontier Areas); Acknowledgments; References.

Charles Sutton (School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB, UK, [email protected]) and Andrew McCallum (Department of Computer Science, University of Massachusetts, Amherst, MA, 01003, USA, [email protected])

Abstract: Many tasks involve predicting a large number of variables that depend on each other as well as on other observed variables. Structured prediction methods are essentially a combination of classification and graphical modeling.
  • Fast and Effective Biomedical Named Entity Recognition Using Temporal Convolutional Network with Conditional Random Field
MBE, 17(4): 3553–3566. DOI: 10.3934/mbe.2020200. Received: 31 December 2019; Accepted: 20 April 2020; Published: 12 May 2020. http://www.aimspress.com/journal/MBE

Research article: Fast and effective biomedical named entity recognition using temporal convolutional network with conditional random field
Chao Che 1,*, Chengjie Zhou 1, Hanyu Zhao 1, Bo Jin 2 and Zhan Gao 3
1 Key Laboratory of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Dalian 116622, China
2 School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian 116024, China
3 Beijing Haoyisheng Cloud Hospital Management Technology Ltd., Beijing 100120, China
* Correspondence: Tel: +86-041187402046; Email: [email protected].

Abstract: Biomedical named entity recognition (Bio-NER) is the prerequisite for mining knowledge from biomedical texts. The state-of-the-art models for Bio-NER are mostly based on bidirectional long short-term memory (BiLSTM) and bidirectional encoder representations from transformers (BERT) models. However, both BiLSTM and BERT models are extremely computationally intensive. To this end, this paper proposes a temporal convolutional network (TCN) with a conditional random field (TCN-CRF) layer for Bio-NER. The model uses TCN to extract features, which are then decoded by the CRF to obtain the final result. We improve the original TCN model by fusing the features extracted by convolution kernels with different sizes to enhance the performance of Bio-NER. We compared our model with five deep learning models on the GENIA and CoNLL-2003 datasets. The experimental results show that our model can achieve comparative performance with much less training time. The implemented code has been made available to the research community.
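The kernel-fusion idea can be sketched roughly as follows: run 1D convolutions with several kernel sizes over the token sequence and concatenate the resulting features before a CRF layer decodes them. This is an illustrative numpy sketch under my own simplifying assumptions, not the authors' TCN implementation.

```python
import numpy as np

def conv1d_same(x, kernel):
    """x: (T, d_in) token features; kernel: (k, d_in, d_out).
    Zero-padded 'same' 1D convolution along the sequence axis."""
    k, d_in, d_out = kernel.shape
    left = k // 2
    xp = np.pad(x, ((left, k - 1 - left), (0, 0)))
    out = np.zeros((x.shape[0], d_out))
    for t in range(x.shape[0]):
        out[t] = np.einsum("ki,kio->o", xp[t:t + k], kernel)
    return out

# Toy sequence of T tokens with d_in-dimensional embeddings (illustrative sizes).
rng = np.random.default_rng(0)
T, d_in, d_out = 6, 8, 4
x = rng.normal(size=(T, d_in))

# Kernels of different sizes; their outputs are fused by concatenation.
kernels = {k: rng.normal(size=(k, d_in, d_out)) * 0.1 for k in (1, 3, 5)}
fused = np.concatenate([conv1d_same(x, K) for K in kernels.values()], axis=1)
print(fused.shape)   # (T, 3 * d_out) features, which a CRF layer would then decode
```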
  • Thesis Submitted for the Requirements of The
Robust Biomedical Name Entity Recognition Based on Deep Machine Learning
Robert Phan
A Thesis submitted for the requirements of the Degree of Doctor of Philosophy, Faculty of Science and Technology & UC Health Research Institute, 10 June 2020

Abstract: Biomedical Named Entity Recognition (Bio-NER) is a complex natural language processing (NLP) task for extracting important concepts (named entities) from biomedical texts, such as RNA (ribonucleic acid), protein, cell type, cell line, and DNA (deoxyribonucleic acid), and it attempts to discover automatically biomedical knowledge, clinical concepts and terms from text-based digital health records. For automatic recognition and discovery, researchers have investigated extensively different types of machine learning models, leading to sophisticated NER systems. However, most computer-based NER systems often require manual annotations and handcrafted features specifically designed for each type of biomedical or clinical entity. The feature generation process for biomedical and clinical NLP texts requires extensive manual effort and significant background knowledge from biomedical, clinical and linguistic experts, suffers from a lack of generalization capabilities for different application contexts, and is difficult to adapt to new biomedical entity types or clinical concepts and terms. Recently, there have been increasing efforts to apply different machine learning models to improve the performance and generalization capabilities of automatic Named Entity Recognition (NER) systems for different application contexts. However, their performance and robustness for biomedical and clinical contexts are far from optimal, due to the complex nature of medical texts, the lack of annotated dictionaries, and the requirement of multi-disciplinary experts for annotating massive training data for the manual feature generation process.
  • Conditional Random Field Autoencoders for Unsupervised Structured Prediction
Conditional Random Field Autoencoders for Unsupervised Structured Prediction
Waleed Ammar, Chris Dyer, Noah A. Smith
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA. {wammar,cdyer,nasmith}@cs.cmu.edu

Abstract: We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observed data using a feature-rich conditional random field (CRF). Then a reconstruction of the input is (re)generated, conditional on the latent structure, using a generative model which factorizes similarly to the CRF. The autoencoder formulation enables efficient exact inference without resorting to unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate connections to traditional autoencoders, posterior regularization, and multi-view learning. We then show competitive results with instantiations of the framework for two canonical tasks in natural language processing: part-of-speech induction and bitext word alignment, and show that training the proposed model can be substantially more efficient than a comparable feature-rich baseline.

1 Introduction: Conditional random fields [24] are used to model structure in numerous problem domains, including natural language processing (NLP), computational biology, and computer vision. They enable efficient inference while incorporating rich features that capture useful domain-specific insights. Despite their ubiquity in supervised [...]