Gated Recursive Neural Network for Chinese Word Segmentation


Gated Recursive Neural Network for Chinese Word Segmentation

Xinchi Chen, Xipeng Qiu*, Chenxi Zhu, Xuanjing Huang
Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
School of Computer Science, Fudan University
825 Zhangheng Road, Shanghai, China
{xinchichen13,xpqiu,czhu13,xjhuang}@fudan.edu.cn
* Corresponding author.

Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1744-1753, Beijing, China, July 26-31, 2015. © 2015 Association for Computational Linguistics.

Abstract

Recently, neural network models for natural language processing tasks have attracted increasing attention for their ability to alleviate the burden of manual feature engineering. However, previous neural models cannot extract the complicated feature compositions that traditional methods capture with discrete features. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate the complicated combinations of the context characters. Since the GRNN is relatively deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on benchmark datasets show that our model outperforms previous neural network models as well as the state-of-the-art methods.

[Figure 1: Illustration of our model for Chinese word segmentation on the sentence "下雨天地面积水" (Rainy / Day / Ground / Accumulated water), tagged with {B, M, E, S}. The solid nodes indicate the active neurons, while the hollow ones indicate the suppressed neurons. The links denote the information flow: solid edges denote acceptance of a combination, while dashed edges denote its rejection. As shown in the right part of the figure, we receive a score vector for tagging the target character "地" by incorporating all the combination information.]

1 Introduction

Unlike English and other western languages, Chinese does not delimit words by whitespace. Therefore, word segmentation is a preliminary and important pre-processing step for Chinese language processing. Most previous systems address this problem by treating it as a sequence labeling task and have achieved great success. Due to the nature of supervised learning, the performance of these models is greatly affected by the design of features. These features are explicitly represented by different combinations of context characters, based on linguistic intuition and statistical information. However, the number of features can be so large that the resulting models are too large to use in practice and prone to overfitting on the training corpus.

Recently, neural network models have attracted increasing attention for their ability to minimize the effort in feature engineering. Collobert et al. (2011) developed a general neural network architecture for sequence labeling tasks. Following this work, many methods (Zheng et al., 2013; Pei et al., 2014; Qi et al., 2014) applied neural networks to Chinese word segmentation and achieved performance approaching the state-of-the-art methods.

However, these neural models just concatenate the embeddings of the context characters and feed them into the neural network. Since the concatenation operation is relatively simple, it is difficult to model features as complicated as those of the traditional discrete-feature-based models. Although complicated interactions of inputs can in principle be modeled by a deep neural network, previous work shows that the deep model does not outperform one with a single non-linear layer.
Therefore, the neural model only captures interactions through a simple transition matrix and a single non-linear transformation. The dense features extracted via these simple interactions are not nearly as good as the substantial discrete features of the traditional methods.

In this paper, we propose a gated recursive neural network (GRNN) to model the complicated combinations of characters, and apply it to the Chinese word segmentation task. Inspired by the success of the gated recurrent neural network (Chung et al., 2014), we introduce two kinds of gates to control the combinations in the recursive structure. We also use a layer-wise training method to avoid the problem of gradient diffusion, and the dropout strategy to avoid overfitting.

Figure 1 gives an illustration of how our approach models the complicated combinations of the context characters. Given the sentence "下雨 (Rainy) 天 (Day) 地面 (Ground) 积水 (Accumulated water)", the target character is "地". This sentence is tricky to segment because any two consecutive characters in it could form a word. To predict the label of the target character "地" in the given context, GRNN detects the combinations recursively from the bottom layer to the top. We then receive a score vector of tags by incorporating all the combination information in the network.

The contributions of this paper can be summarized as follows:

• We propose a novel GRNN architecture to model the complicated combinations of the context characters. GRNN can select and preserve the useful combinations via reset and update gates. These combinations play a role similar to the feature engineering of traditional methods with discrete features.

• We evaluate the performance of Chinese word segmentation on the PKU, MSRA and CTB6 benchmark datasets, which are commonly used for the evaluation of Chinese word segmentation. Experimental results show that our model outperforms other neural network models and achieves state-of-the-art performance.

2 Neural Model for Chinese Word Segmentation

The Chinese word segmentation task is usually regarded as a character-based sequence labeling problem. Each character is labeled as one of {B, M, E, S} to indicate the segmentation: {B, M, E} represent the Begin, Middle and End of a multi-character segment respectively, and S represents a Single-character segment.

The general neural network architecture for the Chinese word segmentation task is usually characterized by three specialized layers: (1) a character embedding layer; (2) a series of classical neural network layers; and (3) a tag inference layer. An illustration is shown in Figure 2.

[Figure 2: General architecture of the neural model for Chinese word segmentation: an input window of characters c_{i-2..i+2} passes through a lookup table, concatenation, a linear layer (W1·x + b1), an element-wise non-linearity g(·), a second linear layer (W2·x + b2) producing tag scores f(t|·), and finally tag inference with transition scores A_ij.]

The most common tagging approach is based on a local window, which assumes that the tag of a character largely depends on its neighboring characters.

Firstly, we have a character set C of size |C|. Each character c ∈ C is mapped into a d-dimensional embedding space as c ∈ R^d by a lookup table M ∈ R^{d×|C|}.
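To make the {B, M, E, S} labeling scheme concrete, here is a minimal Python sketch (ours, not from the paper) that converts a gold segmentation into per-character tags and recovers words from a predicted tag sequence; the function names are illustrative only.

```python
def words_to_tags(words):
    """Map a segmented sentence (list of words) to per-character {B, M, E, S} tags."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")                                   # single-character word
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])  # begin / middle* / end
    return tags

def tags_to_words(chars, tags):
    """Recover the segmentation from a character string and its tag sequence."""
    words, buf = [], ""
    for ch, tag in zip(chars, tags):
        buf += ch
        if tag in ("E", "S"):                                  # a word ends here
            words.append(buf)
            buf = ""
    if buf:                                                    # tolerate a dangling B/M
        words.append(buf)
    return words

# The sentence from Figure 1: 下雨 / 天 / 地面 / 积水
words = ["下雨", "天", "地面", "积水"]
tags = words_to_tags(words)
print(tags)                                  # ['B', 'E', 'S', 'B', 'E', 'B', 'E']
print(tags_to_words("下雨天地面积水", tags))  # ['下雨', '天', '地面', '积水']
```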
For each character c_i in a given sentence c_{1:n}, the context characters c_{i−w1:i+w2} are mapped to their corresponding character embeddings, where w1 and w2 are the left and right context lengths respectively. Unknown characters and characters beyond the sentence boundaries are mapped to the special symbols "unknown", "start" and "end" respectively. In addition, w1 and w2 satisfy the constraint w1 + w2 + 1 = w, where w is the window size of the model. In the illustration in Figure 2, w1, w2 and w are set to 2, 2 and 5 respectively.

The embeddings of all the context characters are then concatenated into a single vector a_i ∈ R^{H1} as input to the neural network, where H1 = w × d is the size of Layer 1. The vector a_i is then fed into a conventional neural network layer, which performs a linear transformation followed by an element-wise activation function g, such as tanh:

    h_i = g(W1 a_i + b1),    (1)

where W1 ∈ R^{H2×H1}, b1 ∈ R^{H2} and h_i ∈ R^{H2}. H2 is the number of hidden units in Layer 2. Here, w, H1 and H2 are hyper-parameters chosen on the development set.

Then a similar linear transformation is performed, without a following non-linear function:

    f(t | c_{i−w1:i+w2}) = W2 h_i + b2,    (2)

where W2 ∈ R^{|T|×H2}, b2 ∈ R^{|T|} and T is the set of 4 possible tags. Each dimension of the vector f(t | c_{i−w1:i+w2}) ∈ R^{|T|} is the score of the corresponding tag.

To model the tag dependency, a transition score A_ij is introduced to measure the probability of jumping from tag i ∈ T to tag j ∈ T (Collobert et al., 2011).

3 Gated Recursive Neural Network for Chinese Word Segmentation

To model the complicated feature combinations, we propose a novel gated recursive neural network (GRNN) architecture for the Chinese word segmentation task (see Figure 3).

[Figure 3: Architecture of the Gated Recursive Neural Network for Chinese word segmentation: the input characters c_{i-2..i+2} are combined layer by layer; the concatenated node vectors x_i feed a linear layer y_i = Ws x_i + bs that outputs the {B, M, E, S} scores.]

3.1 Recursive Neural Network

A recursive neural network (RNN) is a kind of deep neural network created by applying the same set of weights recursively over a given topological structure. Given the d-dimensional vectors h_L and h_R of two child nodes, their parent node h_P will be a d-dimensional vector as well, calculated as:

    h_P = g(W [h_L; h_R]),    (3)

where W ∈ R^{d×2d} and g is a non-linear function as mentioned above.

3.2 Gated Recursive Neural Network

The RNN needs a topological structure to model a sequence, such as a syntactic tree. In this paper, we use a directed acyclic graph (DAG), as shown in Figure 3, to model the combinations of the input characters, in which two consecutive nodes in the lower layer are combined into a single node in the upper layer via the operation of Eq. (3).
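As a rough illustration of Eqs. (1)-(3), the following NumPy sketch scores the four tags for one character window and shows the plain (ungated) recursive composition of two child vectors; the toy dimensions and random initialization are our assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (our assumptions): vocabulary |C|, embedding dim d, window w,
# hidden units H2, and |T| = 4 tags {B, M, E, S}.
V, d, w, H2, T = 1000, 50, 5, 100, 4
H1 = w * d                                   # size of Layer 1 (H1 = w x d)

M  = rng.normal(scale=0.1, size=(d, V))      # lookup table M in R^{d x |C|}
W1 = rng.normal(scale=0.1, size=(H2, H1))    # Layer-1 weights, b1 its bias
b1 = np.zeros(H2)
W2 = rng.normal(scale=0.1, size=(T, H2))     # output weights, b2 its bias
b2 = np.zeros(T)

def window_scores(char_ids):
    """Eqs. (1)-(2): tag scores f(t | c_{i-w1:i+w2}) for one window of w char ids."""
    a = M[:, char_ids].T.reshape(-1)         # concatenated embeddings, a in R^{H1}
    h = np.tanh(W1 @ a + b1)                 # Eq. (1): hidden layer
    return W2 @ h + b2                       # Eq. (2): one score per tag

# Eq. (3): plain recursive combination of two child vectors
Wc = rng.normal(scale=0.1, size=(d, 2 * d))
def combine(h_left, h_right):
    return np.tanh(Wc @ np.concatenate([h_left, h_right]))   # parent h_P in R^d

scores = window_scores(rng.integers(0, V, size=w))
print(scores.shape)                          # (4,): scores for B, M, E, S
```

At decoding time, these per-position scores are combined with the transition scores A_ij to pick the best tag sequence over the whole sentence.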
Recommended publications
  • Sentence Modeling with Gated Recursive Neural Network
    Sentence Modeling with Gated Recursive Neural Network. Xinchi Chen, Xipeng Qiu*, Chenxi Zhu, Shiyu Wu, Xuanjing Huang. Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; School of Computer Science, Fudan University; 825 Zhangheng Road, Shanghai, China. {xinchichen13,xpqiu,czhu13,syu13,xjhuang}@fudan.edu.cn

    Abstract: Recently, neural network based sentence modeling methods have achieved great progress. Among these methods, the recursive neural networks (RecNNs) can effectively model the combination of the words in a sentence. However, RecNNs need a given external topological structure, like a syntactic tree. In this paper, we propose a gated recursive neural network (GRNN) to model sentences, which employs a full binary tree (FBT) structure to control the combinations in the recursive structure. By introducing two kinds of gates, our model can better model the complicated combinations of features. Experiments on three text classification datasets show the effectiveness of our model.

    [Figure 1: Example of Gated Recursive Neural Networks (GRNNs) on the sentence "I cannot agree with you more". Left: a GRNN using a directed acyclic graph (DAG) structure. Right: a GRNN using a full binary tree (FBT) structure. The green, gray and white nodes illustrate positive, negative and neutral sentiments respectively.]

    1 Introduction: Recently, neural network based sentence modeling approaches have been increasingly focused on for their ability to minimize the effort in feature engineering. [...] to model sentences. However, the DAG structure is relatively complicated: the number of hidden neurons increases quadratically with the length of the sentence, so grConv cannot effectively deal with long sentences. Inspired by grConv, we propose a gated recursive neural network (GRNN) for sentence modeling. Unlike grConv, we use the full binary tree (FBT) as the topological structure to recursively model the word combinations, as shown in Figure 1.
  • Deep Recursive Neural Networks for Compositionality in Language
    Deep Recursive Neural Networks for Compositionality in Language. Ozan İrsoy and Claire Cardie, Department of Computer Science, Cornell University, Ithaca, NY 14853. [email protected], [email protected]

    Abstract: Recursive neural networks comprise a class of architecture that can operate on structured input. They have been previously successfully applied to model compositionality in natural language using parse-tree-based structural representations. Even though these architectures are deep in structure, they lack the capacity for hierarchical representation that exists in conventional deep feed-forward networks as well as in recently investigated deep recurrent neural networks. In this work we introduce a new architecture, a deep recursive neural network (deep RNN), constructed by stacking multiple recursive layers. We evaluate the proposed model on the task of fine-grained sentiment classification. Our results show that deep RNNs outperform associated shallow counterparts that employ the same number of parameters. Furthermore, our approach outperforms previous baselines on the sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We provide exploratory analyses of the effect of multiple layers and show that they capture different aspects of compositionality in language.

    1 Introduction: Deep connectionist architectures involve many layers of nonlinear information processing [1]. This allows them to incorporate meaning representations such that each succeeding layer potentially has a more abstract meaning. Recent advancements in efficiently training deep neural networks enabled their application to many problems, including those in natural language processing (NLP).
  • Analysis Methods in Neural Language Processing: a Survey
    Analysis Methods in Neural Language Processing: A Survey. Yonatan Belinkov (1,2) and James Glass (1). (1) MIT Computer Science and Artificial Intelligence Laboratory; (2) Harvard School of Engineering and Applied Sciences, Cambridge, MA, USA. belinkov, [email protected]

    Abstract: The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.

    1 Introduction: [...] the networks in different ways. Others strive to better understand how NLP models work. This theme of analyzing neural networks has connections to the broader work on interpretability in machine learning, along with specific characteristics of the NLP field. Why should we analyze our neural NLP models? To some extent, this question falls into the larger question of interpretability in machine learning, which has been the subject of much debate in recent years. Arguments in favor of interpretability in machine learning usually mention goals like accountability, trust, fairness, safety, and reliability (Doshi-Velez and Kim, 2017; Lipton, 2016). Arguments against interpretability typically stress performance as the most important desideratum. All these arguments naturally apply to machine learning applications in NLP. In the context of NLP, this question needs to be understood in light of earlier NLP work, often referred to as feature-rich or feature-engineered systems.
  • Sentiment Classification Using Tree-Based Gated Recurrent Units
    Sentiment Classification Using Tree-Based Gated Recurrent Units. Vasileios Tsakalos. Dissertation presented as partial requirement for obtaining the Master's degree in Information Management.

    Copyright © Vasileios Tsakalos, NOVA Information Management School, NOVA University of Lisbon. The NOVA Information Management School and the NOVA University of Lisbon have the right, perpetual and without geographical boundaries, to file and publish this dissertation through printed copies reproduced on paper or in digital form, or by any other means known or that may be invented, and to disseminate it through scientific repositories and admit its copying and distribution for non-commercial, educational or research purposes, as long as credit is given to the author and editor. This document was created using the (pdf)LaTeX processor, based on the "novathesis" template [1], developed at the Dep. Informática of FCT-NOVA [2]. [1] https://github.com/joaomlourenco/novathesis [2] http://www.di.fct.unl.pt

    Acknowledgements: I would like to thank my supervisor, Dr. Roberto Henriques, for helping me structure the report and advising me with regard to the mathematical complexity required. Moreover, I would like to thank my brother, Evangelos Tsakalos, for providing me with the computational power to run the experiments and advising me regarding the subject of my master thesis.

    Abstract: Natural Language Processing is one of the most challenging fields of Artificial Intelligence. Over the past 10 years, this field has witnessed fascinating progress due to Deep Learning. Despite that, we have not yet managed to build an architecture of models that can understand natural language as humans do.
  • Recent Trends in Deep Learning Based Natural Language Processing
    Recent Trends in Deep Learning Based Natural Language Processing. Tom Young (School of Information and Electronics, Beijing Institute of Technology, China), Devamanyu Hazarika (School of Computing, National University of Singapore, Singapore), Soujanya Poria (Temasek Laboratories, Nanyang Technological University, Singapore), Erik Cambria (School of Computer Science and Engineering, Nanyang Technological University, Singapore).

    Abstract: Deep learning methods employ multiple processing layers to learn hierarchical representations of data, and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.

    Index Terms: Natural Language Processing, Deep Learning, Word2Vec, Attention, Recurrent Neural Networks, Convolutional Neural Networks, LSTM, Sentiment Analysis, Question Answering, Dialogue Systems, Parsing, Named-Entity Recognition, POS Tagging, Semantic Role Labeling.

    I. Introduction: Natural language processing (NLP) is a theory-motivated range of computational techniques for the automatic analysis and representation of human language. NLP research has evolved from the era of punch cards and batch processing, in which the analysis of a sentence could take up to 7 minutes, to the era of Google and the likes of it, in which millions of webpages can be processed in less than a second [1]. NLP enables computers to perform a wide range of natural language related tasks at all levels, ranging from parsing and part-of-speech (POS) tagging, to machine translation and dialogue systems.
  • Recognizing Logical Entailment: Reasoning with Recursive and Recurrent Neural Networks
    Recognizing Logical Entailment: Reasoning with Recursive and Recurrent Neural Networks. MSc Thesis (Afstudeerscriptie) written by Mathijs S. Mul (born October 14th, 1993 in Nottingham, United Kingdom) under the supervision of Dr. Willem Zuidema, and submitted to the Board of Examiners in partial fulfillment of the requirements for the degree of MSc in Logic at the Universiteit van Amsterdam. Date of the public defense: July 9th, 2018. Members of the Thesis Committee: Dr. Iacer Calixto, Dr. Raquel Fernández, Dr. Jakub Szymanik, Prof. Dr. Yde Venema (chair), Dr. Willem Zuidema.

    Abstract: This thesis studies the ability of several compositional distributional models to recognize logical entailment relations. We first investigate the performance of recursive neural matrix and tensor networks on an artificially generated data set labelled according to a natural logic calculus. Several issues with this set-up are identified, which we aim to solve by introducing a new task: First-Order Entailment Recognition. In combination with an automated theorem prover and model builder, an artificial language with higher grammatical complexity is used to generate a new data set, whose labels are determined by the semantics of first-order logic. The tree-shaped networks perform well on the new task, as opposed to a bag-of-words baseline. Qualitative analysis is performed to reveal meaningful clusters on the level of word embeddings and sentence vectors. A novel recurrent architecture is proposed and evaluated on the same task. The high testing scores obtained by GRU cells in particular prove that recurrent models can learn to apply first-order semantics without any cues about syntactic structure or lexicalization.
  • A Deep Learning Framework for Computational Linguistics Neural Network Models
    NNBlocks: A Deep Learning Framework for Computational Linguistics Neural Network Models. Frederico Tommasi Caroli*, João Carlos Pereira da Silva*, André Freitas**, Siegfried Handschuh** (*Universidade Federal do Rio de Janeiro, Rio de Janeiro; **Universität Passau, Passau). [email protected], [email protected], [email protected], [email protected]

    Abstract: Lately, with the success of Deep Learning techniques in some computational linguistics tasks, many researchers want to explore new models for their linguistics applications. These models tend to be very different from what standard Neural Networks look like, limiting the possibility to use standard Neural Networks frameworks. This work presents NNBlocks, a new framework written in Python to build and train Neural Networks that are not constrained by a specific kind of architecture, making it possible to use it in computational linguistics. Keywords: Deep Learning, Artificial Neural Network, Computational Linguistics.

    1 Introduction: Neural Networks (NN) are a kind of statistical learning method that has been gaining a lot of attention with the appearance of a variety of techniques that make possible the so-called Deep Learning (DL). DL happens when a statistical model, normally a NN, learns layers of high-level abstractions in the data. This idea became popular [...] them and discuss what points could be improved. The first framework we will talk about is Lasagne. This framework, like others, has an approach similar to NNBlocks, in which processing blocks are connected with each other to create a complete NN. This framework is able to build a complex processing graph with its blocks and is somewhat easy to extend with new blocks.
  • A Deep Learning Approach Combining CNN and Bi-LSTM with SVM Classifier for Arabic Sentiment Analysis
    (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 12, No. 6, 2021. A Deep Learning Approach Combining CNN and Bi-LSTM with SVM Classifier for Arabic Sentiment Analysis. Omar Alharbi, Computer Department, Jazan University, Jazan, Saudi Arabia.

    Abstract: Deep learning models have recently been proven to be successful in various natural language processing tasks, including sentiment analysis. Conventionally, a deep learning model's architecture includes a feature extraction layer followed by a fully connected layer used to train the model parameters and the classification task. In this paper, we employ a deep learning model with a modified architecture that combines a Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM) for feature extraction, with a Support Vector Machine (SVM) for Arabic sentiment classification. In particular, we use a linear SVM classifier that utilizes the embedded vectors obtained from the CNN and Bi-LSTM for polarity classification of Arabic reviews. The proposed method was tested on three publicly available datasets.

    [...] pronouns of MSA [4]. Thus, there are no standard rules that can handle the morphological or syntactic aspects of reviews written in MSA with the presence of dialects. One of the prominent approaches employed in SA is machine learning (ML). In this context, SA is formalized as a classification problem which is addressed by using algorithms like Naive Bayes (NB), Support Vector Machine (SVM) and Maximum Entropy (MaxEnt). In fact, ML has proved its efficiency and competence; therefore, in the past two decades it has mostly dominated the SA task. However, a fundamental problem that can decay the performance of ML is the text representation through which the features are created to be fed [...]
  • Recurrent Neural Network Based Loanwords Identification in Uyghur
    PACLIC 30 Proceedings. Recurrent Neural Network Based Loanwords Identification in Uyghur. Chenggang Mi, Yating Yang, Xi Zhou, Lei Wang, Xiao Li and Tonghai Jiang (Xinjiang Technical Institute of Physics and Chemistry of Chinese Academy of Sciences, Urumqi; Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi; Institute of Acoustics of the Chinese Academy of Sciences, Beijing). {micg,yangyt,zhouxi,wanglei,xiaoli,jth}@ms.xjb.ac.cn

    Abstract: Comparable corpus is the most important resource in several NLP tasks. However, it is very expensive to collect manually. Lexical borrowing happens in almost all languages. We can use the loanwords to detect useful bilingual knowledge and expand the size of donor-recipient / recipient-donor comparable corpora. In this paper, we propose a recurrent neural network (RNN) based framework to identify loanwords in Uyghur. Additionally, we suggest two features, an inverse language model feature and a collocation feature, to improve the performance of our model.

    [...] as an important factor in comparable corpora building. And comparable corpora are vital resources in parallel corpus detection (Munteanu et al., 2006). Additionally, loanwords can be integrated into bilingual dictionaries directly. Therefore, loanwords are valuable to study in several NLP tasks such as machine translation, information extraction and information retrieval. In this paper, we design a novel model to identify loanwords (Chinese, Russian and Arabic) in Uyghur texts. Our model is based on an RNN Encoder-Decoder framework (Cho et al., 2014).
  • Convolutional Neural Tensor Network Architecture for Community-Based Question Answering
    Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015). Convolutional Neural Tensor Network Architecture for Community-based Question Answering. Xipeng Qiu and Xuanjing Huang, Shanghai Key Laboratory of Data Science, Fudan University; School of Computer Science, Fudan University; 825 Zhangheng Road, Shanghai, China. [email protected], [email protected]

    Abstract: Retrieving similar questions is very important in community-based question answering. A major challenge is the lexical gap in sentence matching. In this paper, we propose a convolutional neural tensor network architecture to encode the sentences in semantic space and model their interactions with a tensor layer. Our model integrates sentence modeling and semantic matching into a single model, which can not only capture the useful information with convolutional and pooling layers, but also learn the matching metrics between the question and its answer. Besides, our model is a general architecture, with no need for other knowledge such as lexical or syntactic analysis. The experimental results show that our method outperforms the other methods on two [...]

    [Table 1: An example of question retrieval. Query: "Why is my laptop screen blinking?" Expected match: "How to troubleshoot a flashing screen on an LCD monitor?" Not expected: "How to make text blink on screen with PowerPoint?"]

    The state-of-the-art studies [Blooma and Kurian, 2011] mainly focus on finding textual clues to improve the similarity function, such as translation-based [Xue et al., 2008; Zhou et al., 2011] or syntactic-based approaches [Wang et al., 2009; Carmel et al., 2014]. However, the improvements of these approaches are limited.
  • Escuela Técnica Superior De Ingenieros Industriales Y De Telecomunicación Trabajo Fin De Máster Explorando Redes Neuronales C
    Escuela Técnica Superior de Ingenieros Industriales y de Telecomunicación, Universidad de Cantabria. Master's thesis (Trabajo Fin de Máster): Exploring Convolutional Neuronal Networks for RGB-based Object Recognition (original Spanish title: "Explorando redes neuronales convolucionales para reconocimiento de objetos en imágenes RGB"). Submitted to achieve the title of Master in Electronics for Embedded Systems. Author: Matteo Pera. Supervisors: Prof. Pablo Pedro Sanchez Espeso, Prof. Edgar Ernesto Sanchez Sanchez. September 2020.

    Contents:
    1 Introduction: 1.1 Context; 1.2 Motivation; 1.3 Goal
    2 Machine Learning: 2.1 Definition (2.1.1 Concept of Learning; 2.1.2 Types of Machine Learning; 2.1.3 Machine Learning Concepts; 2.1.4 Noise); 2.2 Dataset (2.2.1 Data Splits; 2.2.2 Cross-Validation & Early Stopping; 2.2.3 Washington RGB); 2.3 Neural Networks (2.3.1 Neurons; 2.3.2 Perceptron; 2.3.3 Multi-Layer Perceptron); 2.4 CNN & RNN (2.4.1 Convolutional Neural Network; 2.4.2 Recurrent Neural Network); 2.5 Software (2.5.1 Keras; 2.5.2 Matlab)
    3 Development: 3.1 CRNN for Object Recognition (3.1.1 VGG-f; 3.1.2 Recursive Neural Network; 3.1.3 CRNN); 3.2 Training & Propagation (3.2.1 Training; 3.2.2 Propagation); 3.3 Results
    4 Conclusions
    Appendices: A Appendix A; B Appendix B (B.1 Training Code; B.2 Propagation Code)
  • A Primer on Neural Network Models for Natural Language Processing
    Journal of Artificial Intelligence Research 57 (2016) 345-420. Submitted 9/15; published 11/16. A Primer on Neural Network Models for Natural Language Processing. Yoav Goldberg ([email protected]), Computer Science Department, Bar-Ilan University, Israel.

    Abstract: Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models started to be applied also to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.

    1 Introduction: For over a decade, core NLP techniques were dominated by machine-learning approaches that used linear models such as support vector machines or logistic regression, trained over very high dimensional yet very sparse feature vectors. Recently, the field has seen some success in switching from such linear models over sparse inputs to non-linear neural-network models over dense inputs. While most of the neural network techniques are easy to apply, sometimes as almost drop-in replacements of the old linear classifiers, there is in many cases a strong barrier of entry. In this tutorial I attempt to provide NLP practitioners (as well as newcomers) with the basic background, jargon, tools and methodology that will allow them to understand the principles behind the neural network models and apply them to their own work.