
A Study of Various Text Augmentation Techniques for Relation Classification in Free Text

Praveen Kumar Badimala Giridhara¹,²,∗, Chinmaya Mishra¹,²,∗, Reddy Kumar Modam Venkataramana¹,∗, Syed Saqib Bukhari² and Andreas Dengel¹,²
¹Department of Computer Science, TU Kaiserslautern, Gottlieb-Daimler-Straße 47, Kaiserslautern, Germany
²German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany

Keywords: Relation Classification, Text Data Augmentation, Natural Language Processing, Investigative Study.

Abstract: Data augmentation techniques have been widely used in visual recognition tasks, as it is easy to generate new data by simple and straightforward image transformations. However, when it comes to text data augmentation, it is difficult to find appropriate transformation techniques that also preserve the contextual and grammatical structure of language texts. In this paper, we explore various text data augmentation techniques in text space and word embedding space. We study the effect of various augmented datasets on the efficiency of different deep learning models for relation classification in text.

1 INTRODUCTION

Relation classification is an important task in processing free text using Natural Language Processing (NLP). The basic idea behind relation classification is to classify a sentence into a predefined relation class, given the two entities in the sentence. For instance, in the sentence "<e1>Bill Gates</e1> is the founder of <e2>Microsoft</e2>", "Bill Gates" and "Microsoft" are the two entities, denoted by "<e1></e1>" and "<e2></e2>" respectively. The relation classification process should be able to classify this sentence as belonging to the relation class "founderOf" (which is a predefined class).

One of the major limitations in the field of NLP is the unavailability of labelled data. It takes a great deal of time and effort to manually annotate relevant datasets. Hence, it has become increasingly necessary to come up with automated annotation methods. This has led to the development of many semi-supervised annotation methods for generating annotated datasets. However, they fall short when compared to the quality and efficiency of a manually annotated dataset.

One workaround would be to perform data augmentation on manually annotated datasets. Data augmentation techniques are very popular in the field of image processing because of the ease of generating augmentations using simple image transformations. However, it is very challenging to find an appropriate method for text data augmentation, as it is difficult to preserve grammar and semantics.

To the best of our knowledge, there has as yet been no work that studies different text data augmentation techniques for relation classification in free text over different deep learning models. In this paper, we investigate various text data augmentation techniques while retaining the grammatical and contextual structure of the sentences when applying them. We also observe the behaviour of using augmented datasets with the help of two different deep learning models, namely a CNN (Zeng et al., 2014) and an Attention-based BLSTM (Zhou et al., 2016).

2 RELATED WORKS

Data augmentation in the fields of image processing and computer vision is a well-known methodology for enlarging a dataset by introducing varied distributions, and for increasing the performance of a model on a number of tasks. In general, it is believed that the more data a neural network is trained on, the more effective it becomes. Augmentations are performed by using simple image transformations such as rotation, cropping, flipping, translation and the addition of Gaussian noise to the image.
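The entity markup used in the "founderOf" example can be handled with a small parsing step before classification. The sketch below is our own illustration (the helper name and regular expressions are not from the paper); it extracts the two marked entities from a tagged sentence:

```python
import re

def extract_entities(sentence):
    """Extract the <e1>...</e1> and <e2>...</e2> spans from a tagged sentence."""
    e1 = re.search(r"<e1>(.*?)</e1>", sentence)
    e2 = re.search(r"<e2>(.*?)</e2>", sentence)
    return (e1.group(1).strip() if e1 else None,
            e2.group(1).strip() if e2 else None)

sentence = "<e1>Bill Gates</e1> is the founder of <e2>Microsoft</e2>"
print(extract_entities(sentence))  # -> ('Bill Gates', 'Microsoft')
```

Given the entity pair, the classifier's task reduces to assigning one of the predefined relation labels to the sentence.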
∗Contributed equally.

Giridhara, P., Mishra, C., Venkataramana, R., Bukhari, S. and Dengel, A. A Study of Various Text Augmentation Techniques for Relation Classification in Free Text. DOI: 10.5220/0007311003600367. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2019), pages 360-367. ISBN: 978-989-758-351-3. Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.

Krizhevsky et al. (Krizhevsky et al., 2012) used data augmentation methods to increase the training data size for training a deep neural network on the ImageNet dataset (Deng et al., 2009). The increase in training data samples reduced overfitting of the model (Krizhevsky et al., 2012) and increased the model performance. These techniques enable the model to learn additional patterns in the image and identify new positional aspects of the objects in it.

On similar lines, data augmentation methods have been explored in the field of text processing for improving the efficacy of models. Mueller and Thyagarajan (Mueller and Thyagarajan, 2016) replaced random words in a sentence with their respective synonyms to generate augmented data and train a siamese recurrent network for a sentence similarity task. Wang and Yang (Wang and Yang, 2015) used the word embeddings of sentences to generate augmented data for the purpose of increasing the data size, and trained a multi-class classifier on tweet data. They found the nearest neighbour of a word vector by using cosine similarity and used it as a replacement for the original word. The word selection was done stochastically.

For information extraction, Papadaki (Papadaki, 2017) applied data augmentation techniques on a legal dataset (Chalkidis et al., 2017). A class-specific probability classifier was trained to identify a particular contract element type for each token in a sentence. They classified a token in a sentence based on the window of words/tokens surrounding it. They used word embeddings obtained by pre-training a word2vec model (Mikolov et al., 2013) on unlabeled contract data. Their work examined three data augmentation methods, namely interpolation, extrapolation and random noise. These augmentation methods manipulated the word embeddings to obtain a new set of sentence representations. The work by Papadaki (Papadaki, 2017) also highlighted that the interpolation method performed comparatively better than the other methods, extrapolation and random noise. The work by Zhang and Yang (Zhang and Yang, 2018) explored various perturbation methods in which they introduced random perturbations, such as Gaussian noise or Bernoulli noise, into the word embeddings for text-related classification tasks such as sentence classification, sentiment classification and relation classification.

One of the recent works, by Kobayashi (Kobayashi, 2018), trained a bi-directional language model conditioned on class labels, which predicted the probability of a word based on the surrounding context of the words. The words with the best probability values were taken into consideration while generating the augmented sentences, wherein the words in the sentences were replaced in a paradigmatic way.

However, to the best of our knowledge, there has so far been no work that specifically focuses on studying different text data augmentation techniques on the relation classification task in free text.

3 AUGMENTATION METHODS

In this section, we describe the various types of augmentation techniques that have been used in our experiments. As discussed briefly in Section 1, it is very challenging to create manually annotated datasets, due to which we only have a few publicly available datasets with an acceptable number of training and test samples. Due to the constraints of cost and effort, most of the manually annotated datasets have few samples, and it becomes difficult to optimally train deep learning models with the limited amount of data.

The use of distant supervision methods (Mintz et al., 2009) to annotate data is able to compensate for the lack of quantity, but is often susceptible to the inclusion of noise when generating the dataset. This in turn constrains the performance of models trained for relation classification. We consider applying augmentations to a manually annotated dataset as one way to work around the problem.

In our experiments, the training data was augmented at two levels: the text level and the word embedding level. The Similar Words and Synonym methods were used to generate new texts, whereas the interpolation and extrapolation methods (Papadaki, 2017) were used to generate embedding-level augmentations. In order to apply the augmentation techniques, we tagged the sentences using the NLTK (Bird et al., 2009) POS tagger. We restricted the augmentations only to the nouns, adjectives and adverbs in each sentence. We observed that, by applying this restriction, we were in a better position to preserve the grammatical and semantic structure of the original sentences as compared to randomly replacing words. GloVe word vectors (Pennington et al., 2014), a collection of pre-trained word vectors, were used as the word embeddings in our experiments.

We describe each of the augmentation methods in the following subsections.

3.1 Similar Words Method

This text data augmentation method by Wang and Yang (Wang and Yang, 2015) works by exploiting the availability of similar words in the word embedding space. We replaced words with their respective top-scored similar words to generate new sentences. An example input sentence and the resulting augmented sentence
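The synonym-replacement idea of Mueller and Thyagarajan can be sketched as follows. This is our own minimal reconstruction, not the authors' code: a toy hand-written thesaurus stands in for a real synonym resource such as WordNet, and the replacement probability and seed are illustrative.

```python
import random

# Toy thesaurus standing in for a real synonym resource (e.g. WordNet).
SYNONYMS = {
    "founder": ["creator", "originator"],
    "big": ["large", "huge"],
    "company": ["firm", "enterprise"],
}

def synonym_augment(tokens, p=0.5, seed=0):
    """Randomly replace tokens that have synonyms, as in synonym-based augmentation."""
    rng = random.Random(seed)
    return [rng.choice(SYNONYMS[t]) if t in SYNONYMS and rng.random() < p else t
            for t in tokens]

tokens = "Bill Gates is the founder of a big company".split()
print(synonym_augment(tokens))
```

Because the replacement is stochastic, repeated calls with different seeds yield different augmented variants of the same sentence.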
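The embedding-level interpolation and extrapolation operations (Papadaki, 2017) can be sketched with a common formulation: an embedding is pulled toward, or pushed away from, a neighbouring embedding by a factor λ. The exact neighbour selection and λ value below are our illustrative assumptions, not the paper's settings.

```python
import numpy as np

def interpolate(x, neighbor, lam=0.5):
    """Pull an embedding toward a neighbouring embedding: x' = x + lam * (neighbor - x)."""
    return x + lam * (neighbor - x)

def extrapolate(x, neighbor, lam=0.5):
    """Push an embedding away from a neighbouring embedding: x' = x + lam * (x - neighbor)."""
    return x + lam * (x - neighbor)

x = np.array([1.0, 0.0])
nb = np.array([0.0, 1.0])
print(interpolate(x, nb))   # midway between x and nb for lam=0.5
print(extrapolate(x, nb))
```

Applying either operation to the word vectors of a sentence yields a new sentence representation without generating any new surface text.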
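The Similar Words replacement (nearest neighbour by cosine similarity in the embedding space) can be sketched as follows. The toy embedding table is ours, standing in for the pre-trained GloVe vectors used in the experiments.

```python
import numpy as np

# Toy embedding table standing in for pre-trained GloVe vectors.
EMB = {
    "big":   np.array([0.9, 0.1, 0.0]),
    "large": np.array([0.85, 0.15, 0.05]),
    "small": np.array([-0.8, 0.2, 0.1]),
    "city":  np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(word):
    """Return the top-scored similar word by cosine similarity."""
    return max((w for w in EMB if w != word), key=lambda w: cosine(EMB[word], EMB[w]))

print(most_similar("big"))  # prints: large
```

Replacing an eligible word (a noun, adjective or adverb, per the POS restriction above) with its top-scored neighbour then yields the augmented sentence.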