
Concept Extraction with Convolutional Neural Networks

Andreas Waldis, Luca Mazzola and Michael Kaufmann
Lucerne University of Applied Sciences, School of Information Technology, 6343 Rotkreuz, Switzerland

Keywords: Natural Language Processing, Concept Extraction, Convolutional Neural Network.

Abstract: For knowledge management purposes, it would be interesting to classify and tag documents automatically based on their content. Concept extraction is one way of achieving this automatically by using statistical or semantic methods. Whereas index-based keyphrase extraction can extract relevant concepts for documents, the inverse document index grows exponentially with the number of words that candidate concepts can have. To address this issue, the present work trains convolutional neural networks (CNNs) containing vertical and horizontal filters to learn how to decide whether an N-gram (i.e., a consecutive sequence of N characters or words) is a concept or not, from a training set with labeled examples. The classification training signal is derived from the Wikipedia corpus, knowing that an N-gram certainly represents a concept if a corresponding Wikipedia page title exists. The CNN input feature is the vector representation of each word, derived from a word embedding model; the output is the probability of an N-gram to represent a concept. Multiple configurations for vertical and horizontal filters were analyzed and configured through a hyper-parameterization process. The results demonstrated precision of between 60 and 80 percent on average. This precision decreased drastically as N increased. However, combined with a TF-IDF based relevance ranking, the top five N-gram concepts calculated for Wikipedia articles showed a high precision of 94%, similar to part-of-speech (POS) tagging for concept recognition combined with TF-IDF, but with a much better recall for higher N. The CNN seems to prefer longer sequences of N-grams as identified concepts, and can also correctly identify sequences of words normally ignored by other methods. Furthermore, in contrast to POS filtering, the CNN method does not rely on predefined rules, and could thus provide language-independent concept extraction.

1 INTRODUCTION

The research project Cross-Platform Mediation, Association and Search Engine (XMAS) is aimed at creating a knowledge management tool based on automated document tagging by recognition of N-gram concepts represented by sequences of one or more words. In an earlier project stage, XMAS combined TF-IDF approaches with POS-based natural language processing (NLP) methods to extract concepts with N-gram keyword extraction. The goal of this combined approach is to build an index-based model for automatic N-gram keyword extraction (Siegfried and Waldis, 2017) by indexing all N-grams, filtering them by relevant POS patterns, and then ranking the relevance of N-grams for documents using TF-IDF. However, this model needs to index all combinations of N-grams, which inflates the index size over-linearly as N increases. To address this issue, the purpose of this research is to evaluate a neural network based algorithm to decide whether N-grams (i.e., consecutive sequences of N words) are concepts. In this context, a concept is an idiomatic construction that conveys a meaning for humans. We use the English Wikipedia as labeled training corpus: we know that all Wikipedia entry titles are certainly concepts, and our algorithm uses the existence of a Wikipedia entry for a given word combination as training signal. The trained neural network should be able to recognize N-gram concepts in a given text. Those concepts can be used for entity extraction and automatic tagging without building a huge N-gram-based inverse document index. Such an algorithm that delivers proper N-gram concepts, regardless of the category and the size of the corpus, can increase the value of the XMAS project.
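To illustrate the training signal described above, the following Python sketch labels candidate N-grams of a sentence as concepts when they match a Wikipedia page title. It is an illustration only, not the implementation used in this work; the title set shown is a toy stand-in for the full set of English Wikipedia entry titles.

# Minimal sketch of deriving a binary training signal from Wikipedia titles.
# Assumption: wikipedia_titles stands in for the full set of English
# Wikipedia page titles; here it is a toy example.

wikipedia_titles = {"guardian newspaper", "convolutional neural network"}

def ngrams(tokens, max_n):
    """Enumerate all N-grams up to length max_n as lists of words."""
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield tokens[i:i + n]

def label_candidates(sentence, max_n=7):
    """Label each N-gram with 1 if it matches a Wikipedia title, else 0."""
    tokens = sentence.lower().split()
    return [(" ".join(gram), int(" ".join(gram) in wikipedia_titles))
            for gram in ngrams(tokens, max_n)]

if __name__ == "__main__":
    for gram, label in label_candidates("the Guardian newspaper was founded in 1821"):
        print(label, gram)

For this seven-word sentence, the sketch enumerates all 28 N-grams of length one to seven and marks "guardian newspaper" as a positive example, which also shows how quickly the candidate set grows with N.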
2 STATE OF THE ART

Twenty years ago, the importance of N-grams for text classification was shown (Fürnkranz, 1998). Many statistical and semantic methods have been proposed for concept extraction. The use case described by (Zhang et al., 2016) is an example of the usage of traditional neural networks, and (Das et al., 2013) of the statistical approach. More recently, deep learning (an extension of neural networks with multiple hidden layers) has been gaining relevance for all aspects of NLP, as mentioned by (Lopez and Kalita, 2017).

2.1 Automated Concept Extraction

Concept extraction, as a field of concept mining, divides phrases into sequences of consecutive words classified as concepts and non-concepts. According to (Parameswaran et al., 2010), concepts are useful by providing standalone information, in contrast to any random non-concept. This information, as in (Dalvi et al., 2009), can be categorized as object, entity, event, or topic. This additional classification takes the name of named entity recognition (NER). For instance, the string "the Guardian newspaper was founded in 1821" contains 28 N-grams with a length of one to seven. The concept "Guardian newspaper" is one of them and has a significantly higher information level than the non-concept "newspaper was founded in".

There are several different approaches for deciding whether a phrase is a concept. (Parameswaran et al., 2010) showed a combination of linguistic rules and statistical methods. They defined these rules to characterize possible concepts and filter out non-concepts. For example, a candidate has to contain a minimum of one noun and is not allowed to start or end with a verb, a conjunction, or a pronoun. After filtering out non-candidates, the remaining ones are judged by their relative confidence. This is a metric that helps decide whether a sub- or super-concept of the candidate actually fits better as a concept. For example, "Guardian newspaper" is a better choice than "Guardian newspaper was founded in 1821" because of its higher relative confidence.

Another method is shown in (Liu et al., 2016) with regard to Chinese bi-grams. Like (Parameswaran et al., 2010), they combine statistical methods with linguistic rules as well, but in contrast, they first calculate the statistical metric and then filter out the results with linguistic rules. For measurement, they used the mutual information (MI) and document frequency (DF) metrics. MI represents the joint probability with respect to the product of the individual probabilities for the two words in a bi-gram. Since MI tends to prefer rare words, they used the DF value to reduce the influence of low-frequency words, as it takes into account the number of documents containing a bi-gram, normalized by the total number of documents.
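As an illustration of these two metrics, the following sketch computes MI and DF for all bi-grams of a tokenized corpus, with MI taken as log(P(w1, w2) / (P(w1) P(w2))) and DF as the fraction of documents containing the bi-gram; the exact normalization used by (Liu et al., 2016) may differ, so the formulas here are an assumption for illustration.

import math
from collections import Counter

def bigram_mi_df(documents):
    """Compute MI and DF per bi-gram over a list of token lists.

    MI is log(P(w1, w2) / (P(w1) * P(w2))); DF is the fraction of
    documents containing the bi-gram. Illustrative sketch only.
    """
    word_counts, bigram_counts, doc_freq = Counter(), Counter(), Counter()
    total_words = total_bigrams = 0
    for tokens in documents:
        bigrams = list(zip(tokens, tokens[1:]))
        total_words += len(tokens)
        total_bigrams += len(bigrams)
        word_counts.update(tokens)
        bigram_counts.update(bigrams)
        doc_freq.update(set(bigrams))

    scores = {}
    for (w1, w2), count in bigram_counts.items():
        p_joint = count / total_bigrams
        p_w1 = word_counts[w1] / total_words
        p_w2 = word_counts[w2] / total_words
        mi = math.log(p_joint / (p_w1 * p_w2))
        df = doc_freq[(w1, w2)] / len(documents)
        scores[(w1, w2)] = (mi, df)
    return scores

Ranking bi-grams by MI and then discarding those below a DF threshold reproduces, in simplified form, the low-frequency filtering motivated above.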
2.2 Word Embeddings

An embedding f : X → Y maps an object from a space X to an object of the space Y. One of the usages of embeddings in the field of NLP is, for example, to map a word (an item of the space of all words) to a vector in a high-dimensional space. Since these vectors are numerical in nature, a wide range of algorithms can use them. The three most widely used embedding algorithms are Word2Vec (Rong, 2014), GloVe (Westphal and Pei, 2009), and fastText (Joulin et al., 2016). While GloVe uses statistical information of a word, Word2Vec and fastText adopt co-occurrence information to build a model. They calculate word embeddings based on either the continuous bag of words (CBOW) model or the skip-gram model of (Mikolov et al., 2013). These models predict, respectively, a word based on its surrounding words (CBOW) or the surrounding words based on a single word (skip-gram). CBOW and skip-gram rely on an input matrix (WI) and an output matrix (WO) as weight matrices. These randomly initialized matrices are updated after each training iteration. The purpose of these matrices is to connect the neural network input layer to the hidden layer through WI and the hidden layer to the output layer through WO. In both methods, WI has the dimensions V × N and WO the dimensions N × V, where V represents the size of the vocabulary and N the size of the hidden layer. After optimizing these weight matrices, they can be used as a dictionary to obtain the vector for a specific word, h = x ∗ WI, as discussed in (Rong, 2014).

Nevertheless, Word2Vec is only able to compute vectors for trained words, as it uses the vector of the whole word. One main advantage of fastText is the possibility of getting a word vector for an unknown word. To achieve this, it uses the sum of the vectors of the character sequences contained in a word, instead of the word as a whole. For example, the word where, enriched by the padding symbols < and >, is represented by <wh, whe, her, ere, and re>.

2.3 Convolutional Neural Networks

As a variation of neural networks (NNs), convolutional neural networks (CNNs) are often used in computer vision for tasks such as image classification and object recognition. Usually, they use a matrix representation of an image as an input and a combination of different hidden layers to transform the input into a certain category or object. These layers are used to analyze specific aspects of the image or to reduce its dimensionality. Word embedding enables the numeric representation of words, and thus the representation of N-grams as a matrix.
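The following sketch illustrates this last point: an N-gram is turned into an N × d matrix by stacking the embedding vectors of its words, the kind of input that vertical and horizontal convolution filters can operate on. The embedding lookup shown is a random stand-in for a trained Word2Vec or fastText model, not the model used in this work.

import numpy as np

# Sketch: represent an N-gram as an (N x d) matrix of word vectors.
# Assumption: the embedding lookup is a random stand-in for a trained
# Word2Vec or fastText model.

EMBEDDING_DIM = 50
rng = np.random.default_rng(0)
embedding = {}  # word -> vector, filled lazily for the sketch

def word_vector(word):
    """Return the embedding vector for a word (random stand-in here)."""
    if word not in embedding:
        embedding[word] = rng.normal(size=EMBEDDING_DIM)
    return embedding[word]

def ngram_matrix(ngram):
    """Stack the word vectors of an N-gram into an N x d input matrix."""
    return np.stack([word_vector(w) for w in ngram.lower().split()])

print(ngram_matrix("Guardian newspaper").shape)  # (2, 50)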