
Text segmentation with character-level text embeddings

Grzegorz Chrupała  [email protected]
Tilburg Center for Cognition and Communication, Tilburg University, 5000 LE Tilburg, The Netherlands

Workshop on Deep Learning for Audio, Speech and Language Processing, ICML 2013. Copyright 2013 by the author(s).

Abstract

Learning word representations has recently seen much success in computational linguistics. However, assuming sequences of word tokens as input to linguistic analysis is often unjustified. For many languages word segmentation is a non-trivial task and naturally occurring text is sometimes a mixture of natural language strings and other character data. We propose to learn text representations directly from raw character sequences by training a Simple Recurrent Network to predict the next character in text. The network uses its hidden layer to evolve abstract representations of the character sequences it sees. To demonstrate the usefulness of the learned text embeddings, we use them as features in a supervised character-level text segmentation and labeling task: recognizing spans of text containing programming language code. By using the embeddings as features we are able to substantially improve over a baseline which uses only surface character n-grams.

1. Introduction

The majority of representations of text used in computational linguistics are based on words as the smallest units. Automatically induced word representations such as distributional word classes or distributed low-dimensional word embeddings have recently seen much attention and have been successfully used to provide generalization over surface word forms (Collobert & Weston, 2008; Turian et al., 2010; Chrupała, 2011; Collobert et al., 2011; Socher et al., 2012; Chen et al., 2013).

In some cases, however, words may not be the most appropriate atomic unit to assume as input to linguistic analysis. In polysynthetic and agglutinative languages orthographic words are typically too large as a basic unit, as they often correspond to whole English phrases or sentences. Even in languages where the word is approximately the right level of granularity, we often encounter text which is a mixture of natural language strings and other character data. One example is the type of text used for the experiments in this paper: posts on a question-answering forum, written in English, with segments of programming language example code embedded within the text.

In order to address this issue we propose to induce text representations directly from raw character strings. This sidesteps the issue of what counts as a word and whether orthographic words are the right level of granularity. At the same time, we can elegantly deal with character data which contains a mixture of languages, or domains, with differing characteristics. In our particular data, it would not be feasible to use words as the basic units. In order to split text from this domain into words, we first need to segment it into fragments consisting of natural language versus fragments consisting of programming code snippets, since different tokenization rules apply to each type of segment.

Our representations correspond to the activation of the hidden layer in a simple recurrent neural network (SRN) (Elman, 1990; 1991). The network is sequentially presented with raw text and learns to predict the next character in the sequence. It uses the units in the hidden layer to store a generalized representation of the recent history. After training the network on large amounts of unlabeled text, we can run it on unseen character sequences, record the activation of the hidden layer and use it as a representation which generalizes over text strings.

We test these representations on a character-level sequence labeling task. We collected a large number of posts to a programming question-answering forum which consist of English text with embedded code samples. Most of these code segments are delimited with HTML tags and we use this markup to derive labels for supervised learning. As a baseline we train a Conditional Random Field model with character n-gram features. We then compare to it the same baseline model enriched with features derived from the learned SRN text representations. We show that the generalization provided by the additional features substantially improves performance: adding these features has a similar effect to quadrupling the amount of training data given to the baseline model.
2. Simple Recurrent Networks

Text representations based on recurrent networks will be discussed in full detail elsewhere. Here we provide a compact overview of the aspects most relevant to the text segmentation task.

Simple recurrent neural networks (SRNs) were first introduced by Elman (1990; 1991). The units in the hidden layer at time t receive incoming connections from the input units at time t and also from the hidden units at the previous time step t − 1. The hidden layer then predicts the state of the output units at the next time step t + 1. The weights at each time step are shared. The recurrent connections endow the network with memory which allows it to store a representation of the history of the inputs received in the past.

We denote the input layer as w, the hidden layer as s and the output layer as y. All these layers are indexed by the time parameter t: the input vector to the network at time t is w(t), the state of the hidden layer is s(t) and the output vector is y(t).

The input vector w(t) represents the input element at the current time step, in our case the current character. The output vector y(t) represents the predicted probabilities for the next character in the sequence.

The activation of a hidden unit is a function of the current input and the state of the hidden layer at the previous time step t − 1:

    s_j(t) = f\left( \sum_{i=1}^{I} w_i(t) U_{ji} + \sum_{l=1}^{J} s_l(t-1) W_{jl} \right)    (1)

where f is the sigmoid function:

    f(a) = \frac{1}{1 + \exp(-a)}    (2)

and U_{ji} is the weight between input component i and hidden unit j, while W_{jl} is the weight between hidden unit l and hidden unit j.

The components of the output vector are defined as:

    y_k(t) = g\left( \sum_{j=1}^{J} s_j(t) V_{kj} \right)    (3)

where g is the softmax function over the output components:

    g(z) = \frac{\exp(z)}{\sum_{z'} \exp(z')}    (4)

and V_{kj} is the weight between hidden unit j and output unit k.

SRN weights can be trained using backpropagation through time (BPTT) (Rumelhart et al., 1986). With BPTT a recurrent network with n time steps is treated as a feedforward network with n hidden layers with weights shared at each level, and trained with standard backpropagation.

BPTT is known to be prone to problems with exploding or vanishing gradients. However, as shown by Mikolov et al. (2010), for time-dependencies of moderate length SRNs are competitive when applied to language modeling. Word-level SRN language models are state of the art, especially when used in combination with n-grams.

Our interest here, however, lies not so much in using SRNs for language modeling per se, but rather in exploiting the representation that the SRN develops while learning to model language. Since it does not have the capacity to store explicit history statistics like an n-gram model, it is forced to generalize over histories. As we shall see, the ability to create such generalizations has uses which go beyond predicting the next character in a string.
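To make the model concrete, the following sketch implements the forward pass defined by equations (1)-(4) and records the hidden activations that serve as character-level text embeddings. It is a minimal illustration in Python with NumPy; the character vocabulary, layer sizes and random (untrained) weights are assumptions for the example, not the configuration used in the paper.

import numpy as np

# Toy setup: one input/output unit per character, J hidden units.
# These sizes and the random weights are illustrative assumptions.
chars = sorted(set("abcdefghijklmnopqrstuvwxyz {}();=\n"))
I = K = len(chars)          # input/output dimension
J = 100                     # number of hidden units
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(J, I))   # input -> hidden weights  U_ji
W = rng.normal(scale=0.1, size=(J, J))   # hidden -> hidden weights W_jl
V = rng.normal(scale=0.1, size=(K, J))   # hidden -> output weights V_kj

def one_hot(c):
    v = np.zeros(I)
    v[chars.index(c)] = 1.0
    return v

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))       # equation (2)

def softmax(z):
    e = np.exp(z - z.max())               # equation (4), numerically stabilized
    return e / e.sum()

def srn_states(text):
    """Run the network over a character string and return, per position,
    the hidden activation s(t) and the next-character distribution y(t)."""
    s = np.zeros(J)                        # initial hidden state
    states, outputs = [], []
    for c in text:
        w = one_hot(c)                     # w(t): current character
        s = sigmoid(U @ w + W @ s)         # equation (1)
        y = softmax(V @ s)                 # equation (3)
        states.append(s.copy())
        outputs.append(y)
    return np.array(states), np.array(outputs)

# After training, the rows of `states` are the character-level embeddings:
# the hidden activation at each position summarizes the history seen so far.
states, outputs = srn_states("public enum blah {")
print(states.shape)   # (18, 100)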
3. Recognizing and labeling code segments

We argued in Section 1 that there are often cases where using words as the minimum units of analysis is undesirable or inapplicable. Here we focus on one such scenario. Documents such as emails in a software development team or bug reports in an issue tracker are typically mostly written in a natural language (e.g. English) but also have embedded within them fragments of programming source code, as well as other miscellaneous non-linguistic character data such as error messages, stack traces or program output. Frequently these documents are stored in a plain text format, and the boundaries between these different text segments, while evident to a human, are not explicitly indicated. When processing such documents it would be useful to be able to preprocess them and recognize these segments.

Figure 1. Example Stackoverflow post.

With such labeled data we can create a basic labeler by training a standard linear chain Conditional Random Field (we use the Wapiti implementation of Lavergne et al. (2010)). Our main interest here lies in determining how much text representations learned by SRNs from unlabeled data can help the performance of such a labeler.

4. Experimental evaluation

We create the following disjoint subsets of data from the Stackoverflow collection:

• A 465 million character unlabeled data set for learning text representations (called large).
• A 10 million character training set. We use this data (with labels) to train the CRF model.
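As a concrete illustration of how the baseline character n-gram features can be combined with features derived from the SRN text representations when training the CRF labeler, the sketch below generates per-character feature lists. The window size, n-gram orders, label names and the thresholding used to turn continuous activations into discrete features are assumptions for the example; the actual Wapiti feature templates used in the paper may differ.

def char_ngram_features(text, t, max_n=3, window=2):
    """Surface character n-grams (up to max_n) drawn from a small
    window around position t (baseline features)."""
    feats = []
    lo, hi = max(0, t - window), min(len(text), t + window + 1)
    for i in range(lo, hi):
        for n in range(1, max_n + 1):
            if i + n <= hi:
                feats.append(f"ng[{i - t}]={text[i:i + n]}")
    return feats

def srn_features(state, threshold=0.5):
    """Indicator features for hidden units whose activation exceeds a
    threshold (an assumed discretization scheme, not the paper's)."""
    return [f"srn{j}" for j, a in enumerate(state) if a > threshold]

def emit_examples(text, labels, states=None):
    """Yield one (features, label) pair per character, suitable for
    writing out in a column-based CRF training format."""
    for t, (c, y) in enumerate(zip(text, labels)):
        feats = char_ngram_features(text, t)
        if states is not None:
            feats += srn_features(states[t])
        yield feats, y

# Usage: labels are derived from the HTML markup (e.g. O = plain text,
# C = inside a code segment); `states` would hold the hidden activations
# produced by the trained SRN for the same character sequence.
text = "use Blah.valueOf(name)"
labels = ["O"] * 4 + ["C"] * (len(text) - 4)
for feats, y in list(emit_examples(text, labels))[:3]:
    print(" ".join(feats), y)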